[Binary content not recoverable: this file is a tar archive (ustar format) whose payload is gzip-compressed data. Recoverable metadata from the tar headers:

- var/home/core/zuul-output/ (directory, mode 0755, owner core:core)
- var/home/core/zuul-output/logs/ (directory, mode 0755, owner core:core)
- var/home/core/zuul-output/logs/kubelet.log.gz (file, mode 0644, owner core:core)

The compressed kubelet.log contents cannot be reconstructed from this text rendering.]
gM&>䃘j)Gmu]{uf{l\ݴ$f>"Io|VGν :o_KN"$ bkeF;ntJ1uI_zy{xK~]A{i@D$)sx&`*3@á=uB_<\Wγw\{dy]5[>DNJG*\'/K/$ zZbC[veu?]FΎ9LtWG'y琥 Ms7\Hyk*5ƄļJa@Po3!T)מ!z">ԼNaṞ^~PGEq8ȵkoIim`Mƻ|WX0oN=~m?/V Nbie09 '0zpɽɿ;Ǎ4ᣱ&A omTDɨh:s{㡽*?+Vܚb@چI&ӘT4lz9)W(cͻvRL'ы2K!)zCcî#n{~QUlpe8]{ r<=9̻PϦ_0a"L3J}Q-@;yW&A˙ yTJpB&%傩DU p&q.G Yjg<¶f?|`ڃFZl&^j}GȲ7n7TUQhH$Ȉ°yLk$B3%Cx=v ΒnNGc,͇, pۅrljhad0tS境 /5xe&\.k{@-9逹ځ'/ QaLԼ|S1uګU6R6G~;X1I!2nOPMGRn n"H_ERuU,.~wdhj8#ӴN zĽ:Yu6yibEG&)LF9j{c2ưN WJհ&z xaĝy_SiEYg"NbZF'OSi@j8qxh2]VekV'ADP̥`ޖ>*QU6V`un7ѳWW\kqWXvW5?<>Ĝ~n֘WrM4Q2<ϛyLpJ&fZ>罥B ՉKYJ{zǶqY&y7m(I^4R\ybKJ,QF#XX` e9KAmYNnDl =WE=F3.182z ={#g;DdVG*w]YNo|Ow&U}#DcNf=y ͺɭ~[uKSղvmRy{͕2H=0dh 2)x8 >PTDŽjIiQbL#* Q#[YtMJi*r(#PH-{#gdJuѝ06`+˜52U9 RpjҔG-мG @Q+;ME`>>0ֱ> m@ڀDڳF#b=b-^43^;SFҝdq8qs}J4!Lg1\fb/?_`MokWH8#|Φ,v.a\aJO?V?8j [dPzpHIJznQo /ˮr.eB]QyTeҋEɻZжՖs /fGAOEsbޞ]OT/ jz1[z>^\ +! _'weíupPUKad(Z9Cc=wbb.p?EK/b_sFSBȥ$(*NK/  mbJNJygRCiݵAr9N`=o iR~4ӺE^(CPtބȄB:er啯f7!]%IA!滭sԓZ_PV+i{%`! |H#D-~"֓bJ!l@* H1CӀސ!3GC @#$u1K@RQZᴷR3~qHr\3Δ%9X֖̀pb!I#PmPg'q9;_085X.Ih|D%Π(S-AH1%fyKQ A~b'S$ x#x>Y U^CAY-M`a0z3|O/uo}FF5qMAА][o#+z[y) ad=@ d`j+#K<3Np)vKl%LqZMEvbYpz*Sf;gBJV\G㤲=0Z /.jڙs hS¨@ .@A&-3qV' ^2 zDir:N|_1W _^ck;+π( u矂V3D YEJ(nDB%;8d_E']@lR5Mu>~ҥqig\$r" ;Rl!MfQ1ᤉ1vzq8-0`2BO}jXϦmOo/>|"ELkĝr X Tq;t;SάFbg@<42^gʡWo"HpC3 &}RET's2I )ʓPɲt ̈́R"A{:.Tm Ux5cф6./mr4 @9.QAC"buEH:%{? csRMB'3<'nDs+DtNO,56Д~J^ >n[&AgQq%O26p˩_.I^YJFֻm`Y1{62ַ9l-e_2_H~3=_~>tlk|+.ͪ<෿\q1jWI GqO캫/Fc k(;ݒ+%9ܟٺ7ntVp9#S͟-~kI䍣9P(ד p6ͽe8e)k 1̽my9HkyNtqq>=_HA5Fߡ ٴ1&崦fzpW0x4yA<{GUɯ̓fޭ5 lI̍:]WnuI4O˛raGq k$:G~aX0t(Q`CO W0b~ØÛlͣ.&5j\]s‸<@dswuL o&mfީ:h;?Fv __?_?z//)ӗ_F,zT g]$Hb2 x6u%Cs -Kz໌KNcg#x؈cv ? 
zq99ge= ,dߴ`QOO2aTl!}1rYs%">z$-E =lSƍ\Ԩ3^e$PJ-%hK2 r턕2<8|o$}oob;CC: |9l{Dd\H%ʬԎCU䍷.e̤HNC ].?H>G;>ɤs\Շ^}Ы/aoFeJh<.N'9KhԦ/o \|ѰxghM(v>(,YU/iR"Kc \RYEpn13,"5^&I i"NEa\}5>lzǤ,bټoA4 BtAN@` eto?͚12h[{-x1m ]mueG(5#h, ĸ%,x#+5W9XEt}5Q|;v[+o^3+_[9KkBc`r1D i.tJ(4^')](t~BTZzc)mxU#gζ'.X]ոׯ.%;b;!S؀Jǹ&)c>ahƸC_/>k=sOC5ގR0d Q\Xdp*T^kfE4 G.1֋!e9D^CH@)\ІHygLD'-iEy hTp{Td@DZ2ֻުۋO8=\?UUOF̞}4{tvcܠ?0ȫ̵/i!̇7y_vJַ]=H]hmܢfa!3msVmZ>+u~U`V^nz9I >P1@#ƀfR %Τ Q娀ls|Ɵa6ye{\#`y<|bY!WG^(PlLeNJw^%NVۈ&Z6va^o -e7Z_qr=q;: laIyKY e6h}`N\}qohNVZo-}>ns.`cq&h$0[Xy  Ql릋G~@ u<y,TZQ֙G_-th^oVٗ\1,N~:]!K!_zz : ŹGH|ug`XbIP&s*FVƆTe`S @q?FMU~^}f>^ٮOO%ZU=:ڂrAZ(d( ~TW?+- TW᣼*DsVH!UpB9̩{CW)e#ZI %&qD#w&@()Q P)1ZzUk=fu6IQqϳ|Af~^=z~*nln C@=Nc|"׻>>oU_J 9U%g(*j4xf;1\OzZm?P?.@KzDpj,qjE0%xaQ4NScp@ :D Dj pR9& #gǛ#$iϦMm$\ۃc3z:"l=ߡ-qmgugv얮Zx'O7w.'Dn$gnmBC> |os"4w i\g[vM%b=9jw|ϣ[%qWtБ '%GFI 7IʶH M|Cjv!EMjA :Y-:NK6H@4*Y`TYgV:-*1~OK1$' B,@҅Ӗ'G`b@kb,^NHƦ绦~lY{B[ԻD.*{YlKRWB\R;pʊ*c\6hmE}$Ϟث0{9.w0Ι \Z)=KIy5F%K FqD;\BjΝQzv쐈ɆtJ1+|p8zs(ưqirS.yk@󣊡[cV p_+M$hy8)hDQ a(s!0㉒\R$ɍ1PG%I̋>SN'NGE-,7'zk=4ޘfgmF@uMqۢhR, 9,9]~Gˎe+ұ-@S8Γǟjw1d{BH٨DEEU: d E9h꘷Yo_C3M4go|~{˹ž'o YIAԴ,NS@~`!µ#/oE&Tl8`DDbr|1Jہk?}z;h}Xhe#3|4!h 3Lde*HF]gOϼ8G/St)_A}HÅ䛓1wR~vZ |3@4&hdL]qzyeS~hv&7~>|wi<3 S{DT?~WMUȗU^E`9Cs(nFAl5`,BhCY+xb ARhEk%yMJ":.(GoCIr[PD,L@V:ge9  DB@#ryg'Zx]LKEYDE<]fKGlLܒ,#'f( =Ǘ8};Ծ=~rV̊ ,G$I9)FSJVm&B/RB~̍ vZ:95mpX@sE.N͈2Yb&L,DC.Y#u$}ٖ_c}E<3rSl,hHCL2ǦJo"DD8㴎);+j]'QzU{ӱ#&|d.I,OIS+$h'(MWY&͂bKimEt )n<ȱ&k:)ev.)-|:!ÿĎ kINFElkf4"qDA2ȥ\mf2r!qhۙWҘ2zl,*SDQ&$1愰")ǹ Z单F}gFJ}._z9̄(bM:~Ɨ^-6C4%W٩:^qU(Ň։J`b$cB)\`(01Ķ9YIpDB8f83|!8y0RRSg7!h Sge/8ƉU-Y ' Y4$&2dtv;2Κ|՟_5:c6Y෯gz 9_NNIH:e^A [CX̉gM1F<':S5S+ND~ZkDcU@m)jE ZPJB'8Z48ߕhb͟|{~ x5\h)RP$lB_f,!yff<`0A4,u{-lT]aHdSk9yM̓`J0P,'x4<:yKANYr`gAHkY-۩(dS-s|X﯎Gz$a.͎KW5D $tqO5K 6 4:gK(2THX;̬A:d`{,QW. ImϺ={gdxiRlH:1f#K|v˖\.33eS0Z&DAhk'`(%đ.*d@BCZtb(lo5PV[**ݱRRdG%wgq#{B 6UײQu|cQmT,ͽr}d[Õ*2QwG?@Z_S5*O 7:(x~x.z}I>T7ɲ5.YS"3P4OO%)~./fԴO̙7)^&Q5~(B|pzL@l? 
^*V$ʑ}UÈÀ[92#460XXpп#z|5f7krάUGOrըZd\Ӝ?Q,j"2 G㣆hz*?9Dɛoߔ߽:wߞz 89wo_ a`")U 5?oφi:C6g=C:BS^1gcijy u0.x5׽q,.GO9pҘ>p Ml~6MBy߄ʑ]@ϰ! >B LF86_'m~{tݒ:/a˄g$E(epіk-rβb`HN$U6vNc6e%®ڠ!iZL2i\b3 o %HE}@g$)\N,ܬlx[mW*ݘtZŮ◞&~yBKE+Ԧ׾Ox5WBZUtV: j`"#KwK-U7 dl=[z/%0JQyX sH|"mhLڲyTPGϷM)Kx7Ғ`P܋lJTTdt-C!CHVaVu^''ם11$/Rn!,5Kj-O1}艚x T@p%=2$shĨbNnkïڰ¥m7eZ~s5LUYһ}lzHUàJ_A5Nm%I!I/ {p jDQ㳘դdK(vˢ'&bb]@:Z>FW"rvh}m:xn2 +Wa~mhÆZBu/ZS-:JVHu&H%(TW_àSixj6EbrSq't()d%[U;MegFVD9f/Ӎ8EwQ3$\q!ke(.5bYIdRpIuQN Rr=s[iBe2j$FY$:MX[)t+FΑ_zt=yf\ǷK">vhup@ʮژdoݘ{kD6MznF l&Q:I%5D2DžD[ 'BptJr  1kd4Hc,qjk#/iOu(EY`q2.9;qlH}!e->^hQ [)^dvE,dY*(+nYnh-~m9}G/S)G1bD8jGv$lW~fQ ..@¹-=9/Z3_\7j>]V{P;Y"Y*4l -!2O/RnHRn^%.M,q$e7xƠX4 }r>?qhNFMsn9ݓ6ްLrIqn9*ѳ@Q7 ̫9 Ƅw9cJimv'~ύVxyvgPiMYg"J^+ҡi|e9&:Pj1o!IO.\xJim"`g<輵6pj|XD48-N-w2sfr L !+7^Xitc^P3kmҁϼ:sOle0kڟ|:!| 6}`)p?l~:>9̀Ѿ;{7\ﯣU1=$./Z$f@Y3ZAC/F'21p CF.V¹]?Sp?E~XcPAlXJ:;^K %F_g~\^= ,nӦ__ۨ3otboc+?|nn^g[eӿ^ e@?%13'2 #ƘC ejSsUӄ>F[ʫs^^]-emZW0}2)?Fj|#=q?cwxY*1fS_5Wzlc=߮/okܷ(I2Vm}8mNm廢_z.h""J\SPIQJifTl-e!Yx@-.dٞ{u^dlKT).zB$lI'QAat{3}ٞ 8`9+rnc*t?jk. Z()S9m?ir d8Ms-BA3%cO%t.,OwbΧ>URɵ&ȍ䖧H; `KV[>vy&4Jx5Μ6|s0 ʐ 0#HSR9e8Bw^B=K圤3c4^'.t$ѠQ=Ì`nCZ>ڱZ`#BiNփv:MU .VD :#2а|1_5S?,?9 V,|.88U><{M}l^}K#ׇo3 vfKR#!.D"rZ#α@.j]kmC50/r\45aԽOvpkNiJ{1WHD$GKf)aWdgg&爺[iH1+|1wisnoy?UC*NJ΄œ m/;]p +X8!ǨE`шPQGG* d0؟HY\„$zQkE2qK,/2 "v||֘ec^C[.]Ll߄==:ЀՄr0ϵ1ʼEw>A|h)>jఁb77gooZ bb,`w-LŴr$2#A|2/sՔ9Q&x&\\+P\+DƦfPa΢PÜBxR9uOf0dsй@$KM Z{-v$LQ&5.畂dXK˭4$Kb1ri!E2^aL\??> Ga[#83ʚT"gb0z]1 *mxՊW+^xՊW+^xՁU+s37ypG:S3Qi G<p^ ((Sk 6H Rk A RkTNAj݂ZAjmk6H | s!kmZ_kmZAjmeD RQG6H Rk 6H Rk 6o8ㄞ;^ ;us'SqY{#՚!nC:pNz츩VkVV {kU,21%n90lj&J) n2^'0 2 뙱1I8dS쳍:Q%f40*Fa`-R#}pμ>r}^^grgV ɕ: c{1% CH=Go$ΡD#EဌyPnECCLb6HiQƋʹQu^p=ǺLkf+^+֗rx>>{_vXc`Fkx:n. 
Cv8z̨; @wq4eFwq:Z@hjXVn}{7~ ^e+cfs~mnǞ[ǬNF>=&e 9<)>lEPkjtÇWVHu&H%]Y >:fS.&M0EwB'@f KP0$Y:UA'O i递 z) KqX+CvQ KϢN$KӍrX39U\$#څh 0Ne"Hj9VHt6aSV#3Xzt=Y*\ǷK">vW!Vh32doݘoo=ȦIoO@b6(LP$KZ"B`!8:%9 CZ52Xb#VG#$F^Ҟ-QR%Tc[@)rsNx>-R_Bùw_?w1=?KǦhm=vE,dY* [ats p9-a=C=ZwVVj=Vq#VXEg0%o.1R^$C-sP`?Rp l 6$3ړנ\'Euo?a[?pb b2Ι`MMLy7eZJ_9SX-q8Ez3_ iPp?Jc@Y $G`' jHSS|0l<eV`6h%24Ǐ H( K;qO^ך{iNdxkr Fseb)X;MLq\9C8&QM.GrT>;{^h.񁓇wǪONj__ {cs(C ZURe,% 6bNMml.EMeKkq F\ 'H;-ht72DKk"0ƋykUJ:aA6s0Ht2l"8/T@S)*V)‚[ܴ`173 . L;0˒._l%YnҸuSd*,V)H@ K&Ym7!ygF/cL%յf쌜 vD)/ ;mu!X^>.S6C,ҏf}`p f89<'؝ x@a2͌'\R4҉]K`!#a:堉.c f^`0QY1s`f`29$ rP!<0G *Hc{ۗ2 ɘVZ(QHLdHAN A$53Gc]Xˁ@/iB1w"X^BpQV^'M򨩺+{=BSiI$}GʙrP>]59rNrp}]4A\؊W1F="SrQgDm:p0FF ƔvI+PR2¤5 ( e5BIK˙eFHaHt.%$eCM䒏KC/WKea@Z9`?ǁ|i< b&pC$#Q ({Lr:/׷'0dVX1 rAPQxVKK@w+gEC%D"6XDF4QAJb5;Pd4X:#gC9+߀:>]ŨzWR-SJ "hRȹ (Jc&irS aol/]շ@O0^"PyZFjnc0D ZrcСZA-7NzY)'Jv韚Zì),<O8OIHTSd{щ(tv+ĖwPJ!z2mupApJ))Kc\KKi-0,x̰@H9B6< oXd[f inl  %;-sƖC(b<FkI$0H"i"E9Ql1c;u3V:hA_&H 3vw(E"h'J $J; e \4V++Ln ":OrFϵ('j@Ԣz{REX,psfzwWé8 GJ]'TnuEA'jԊ %$F-(FL+E:RbNPy, 4Bi0ݞNn 0FƁ[ɤRRl#vAq9$"+L}% gazzniA10bJZ e\Ʋ !V9D83)x/H 6{ K9KГ)E]LN450(:iX|pyYqig#kqi\q]f~{N^:>#+|4VqԺQ,[i8NzR3ٯ?|lJFBbAe-~pfgv(b_;d*Rac <`0w9DRP;4E.8 CJ`pVtb քK e9mNۀӓEa4Mc,gFr<\gT$er6JMt||4z+_`tm$h~yk?n޴]5 5MS6iZtz7iWuv@F͗R(8n? "\7GO:pY1"$g b~QYwҰΩrT!|I1\c*em{IǸ+HafCBHr2-=V;9EcRB0eVT-&J,7BcJIϽvh]ar!ɨq,a"^ %%:dFu<-߮xhg}t:y--U/q$i|5`|,?N ݁^zoY9:0T*l}ʨW2r^{$%qٵG9,)I!5 u)f/ձ@Lg>Yo"jN$DgK*㑥+`"\K-a$EV 1ΒCݙ.ٰp`[ ߗӦ꽐A:O9iZݗHWR{XwQ`tY/7`"{&CD{fhp%J7bi1g;Ja8҅ܤijo6sVX]Mvm^7S&Ce{\"`ryDz%E^XzxfXyD˷tbIK:v v=P0hB% -TyZnܝ-^-ahۓ-4II%U9|wU"`DgH:b` ³'&^>WPT/' FŚ:oy6 p9\"|ͬ >1IuEeR{MyePo&u+DUY84+jNPiaNDtNz-GYC-B7鄷F_\K,C߃taL$#$,`13]yd[%4Vvd1Ewɵ3ҵu]="Riu&%]X ujBk}fR`h(i_&] V0#)6gF\aL~K7&`襌46k>elJa^iGjZ{1xm6`u `*Y# }mR( h(,)K 䟰"P,mR wީkjn~ZV}l0vg^L5껇opTm_ `pd-Vb#BZ*! 
iahPVz_gQe`2S.N;@]<y,![nuF'+Q73nXVQR*j|J#,RFXpS{ ֺqjmԾ&=õ8Vc }~z+|^W\̆۳9yuT83xnd09@Gp4bS 6e;Џ\%3 3PF.H.rĠNP"ˈa AHRVa S띱VcNؽ=Dk45[!-wFΆ7+ݼC>AM'eϖ蓕'\hvdfZxLB뢷վ3jw7ܕv7:~~[n|gUЦfY[ ߙ&R@ |̪YAٻ!oaܘ]g9 %SO5_L TJ2/*Q+䮫DBzI@7S ?(_6Ly8{&K$9ϡSM\cE)y~mqy>또7 6kyaa-K٩:2o|py j?QSM}cgmI 9 kY"MᏚdIdqnfyuT)3z_b]J.z D )G t,D.uhPI+TW 1ٟD.ۛDd`+Nirϊ&z?|V|sY4[Պ|H+[|V*X^a &B G E{/>DxE\Q GqcQ+!0Tb[!4}0_n_ o_{#9s9AH?ǏGRoN|t?Krw *Rp!>>X+Pȟ:dr֙}m0Mb/ӿx_{ylk8 GJ]'TnuEAg5h&t!L#&^q#aY,> yN6Ab`Fն2e'އKdm3 M6fhA\& b u@ ;\*&-GmiDQ8y9[&O-|(oaP?3u,No`z5 [/jw[wt|\T^9"(-hÙ#`1ƉVXњEA!h̡Iuf[fSx /s[Q[YLpw4G_ap'zyTqM !pl">89ΛxAg kh`D+)̏ ٪=(N g`=ubT@ ;hȍWnpFg.q1H\L\er $͈浽]ޯDGGÛ*Ԉ٨>[X0&%ǚ >`z|mF7 `#'|̫dRc?}-&Fbй& "F'kPY.^VWn\D%e-VU^ G|gIӫ@+) C_w zD;Uʗ fޑP0<-Kd(b=;åwP)& &UL(t[$)=Z&AUI 4 Sk y5tbl6J}vڛ뜮Yz>q58?B6fj%؛>/dfnd?_Ί;C۔J荡/5Xh}կ NL5^YUڛT/cRY"*̅SLۣzS񶓁?])慗&*i+ȼr@Q Y7B{tk.*$ Q PSі1ABGT`ɦ.<*% i!ĩ\Q[.J4w3H1%b o#g/SBu>q%隅zx>^Da+]@ʪݮ']NcnyݳbZ7XhVh69J2QŽP6(%hh"8X*:b**ku3`epK&0QAqJ.Ff6]+lyCjb2ۯ=˗,zZR\E^KLZJWDKYj*Va?r(V 5.-ŊD*Y (Г&]fWٵ]D{*oR ~OĤu2} ǥĝ#q)"f([ VuUs{'|6 2}_#LGTuWs꯽[bʆEuX93|%soLBxpϗ_=]x`Ztk* ¾5yE L } {K|E)R|+zpb㵫P>R.w?f,%DItH Яj)Y2GƭN$rz ,ܢ8 I6Li-c"qt@}SĄ^o1r6CNMnMVӬW4G(_K"z1>yd"McD"]lz?wT:E7_0YWu޲=NyaR/X{V-gE͕2H=0dhs;("Ă@BmZRu&IE%2:Ř'ŧ2&" Fֳ UQ*K(JKbl뭔8㱲PO* g]]$O6q3KqWuh#W/|@Fa=%]~E p"#Y\頄W"jf@t VmJ)Ef'xù];+ea-{[%DUJ6* 4$(6d$]Εأz1k(!J t@ NmR$}0(9mbBsQcozHH{{P,W 5"jo{gx1+} +imixxDc̐Z" *FZ+!zo>o1\Oh*Z!x-vIQTFm  ёW!q+Q BFR7ڠNqV-q́n I0*ޗvZ.I>(DDI*3tʠ%(I:YDF||R[MG)g}'Li+} _0g"0Vy !ed4*fjh_~o{ޓ/2dk i% (.2)@ %E~+_HOn(9 y.DeJPM gFu"ޙ!&(^f-wse/zobcW8-򐶀 9.Z/$K|q(id tR,K )y]=R[Kclqc uVmpH Q7_@Ɛq(>ؤd2DBTŃM&d&yGOj[ ŤcJ6JHA\zCc}ϑ 2.M' #T ''Y |Qtvc?uVy/J[nW Wt*^ Y Bġ;FzYVS*$fѲujWBD*ι.*۫.}Њ4 _*=8c%#-zLƬǬp\GW"7RO0Uka;GSyK[[/B6auz:ǐkLqKkO9c.7u\` +UDD̈ ͩ4sK76x-n\KEZ'7 g ɥ0RSjs$ km+G.v-@?u7;8cK$ߢ.|%6% hx*~ua}J6MjJR*` VS@!C>'SrG`['5m!pd>9 МF51e)hn(ڼVJǮ|B-Acosڲ f kv+Z4T_m}nzfܢt9a2oožap~G=o8|98velCwm0>k`{6}t>9f[͏k[lt|/-%%rC?4pUh, , #w,XUYQ!l 2"7'C, >k4+:Ʉ2Y",:.v"/AAY߸htF5bٝzY<մ'|zǁzK^ؽA;_ʻG_e_'?>/C!s2^ 
3-8);:{[t<)۲הsڕDTc0%p-t'iJU*kS2xBB+\ 8OMC1,1⌏W|y>PmKnR^'"F7K61&h<"wsAJۢ* vm tb53c͍!>::wP*Gmo+*'s?"ipWxHH>6Cx>O'͎ٷϟA2VD1aa?&ړQzeO8X9XPy,{lWe ߭y}XE%9{%4FV V:a>A ԂvH F zIS]4R 15P Lp/2 HA alIcl;#/.y*+J&drBL4) [ov_ms\(eIWyh[CYKR %94g+!Hהh(r,؜З1Ƃf<9N蜑΀ESl҇Pz-M:J'z\-4W0e\!A69G=ݯ: h"BCQQ.pu o F i[#CAz!s~,tr,a[/I}rCg'e9\)Ͷ[B^{8vttt5uJ\U`s:3qݩ"Wiz-2IIk_a-\:sUUx*Jk`JiWhX= \UOiTI |J):zX[_`WR G,2[?ޟ ɉa@W|psz>4$o?_;0ս4~([l|Ϧ~ϛhG U[ HybG|[/1+VO Y0TCcr6t~uE[)kXh UrѯoٛglSc?!4A[y:h{P,4QvhJiW&J%Tqɘ*C7WUJ۝hDqR;sEؓ1WU\b\U)Wj7W\YP_}eɹx|u_͹quBksgoVU[n{Cz-ɤ$Cs щ`p sVו;k_[dLޤN1dk A-h!LI._=g=B+̭I8Ȫpo)Vǟb!BPh]zsR4h\`\P Y l&fc==j \ȵG҂G T>mmkRNv|u/=>c/I#C؝N K٩*- ٩R6vM_lZMiu6Φٴ: a\5zw=5n 9Q}rbd8,F}p{<~O2߆~0~0FOʘ7(PTh5(#"!,lσ`lzgL`92Cp1 (<%g y/; Ѝ6gǮ* ]}jה|M\gbo`~ om {Нw&zuE·6qY.-S/Nݧ_] +`#]Evٳ3W y:';}KbX*[ۛF3=[7OOs42͘/9F2ui7=3>".+#7xa廛5Cڭ)mpu[&N>_GzC\r 0-%܎9dg޳Ezw:T}s|s+NԝNԩ:RVJZS+ujUZcVJZS+ujN c:RVҩ:KVJZS+u]JZv}V;RVJZS+uj׎@2ΗWɏϡ8E$jFdAgZp0Sv*upq0%p-uV'iJUS2xBBj "uL M.<<5p ʠ_8 Dr_5g;8"8F_ܗ}eu2;eG|fݻg_L l0;A h&ھG/X@tV{n6Qz{!b53~uȖo7rp;^#7Goj)q1(NavWL*-RNG[^ OV_]n͡[ gSڙW;t.ِx|WTQ‘RS;A,:4nĈzruѢ\1V #?)1gh-xW fGinz-,{.7ޭ}yfokr yoQRK,?ur:8\<& I##YZy0mS{X64UK2H^t䃔yԐILMlC**HNu6ud3qÙSV P,bT-bM׭\$5+RW2$gL@H!Z:kadH!bdI껦uMA)n:=>:_LֈI}iaƝQ^O Z h~ dbϭCЪȜdq.yZ}VR+V/m_.ni@MMqۢhR,FSv]~ϐ,;,[- Z'&Ù؞<=#é%Q JJ)j`w 5-nSkWIkW/&̵eݠ5]fKڲPWgsV@j! oDbEH(:%{FM"<'nDs+Dt.,56Д]gJ%>56 bu>+xUOКA!O[N $ye *YdB}>KY1{62Vgǿ?IT;y`TuԭQ-L'8,4'vp7?ƢiQy}*uNC w1 |(nUh/A@ƫ֌Qo۫ojNtxx09=;TJIr_}n> >c }ytr$8?Vw!WPvt .Fvy[br^x=-a3`N( 'ͺrssNb4r1[Fԛ-0^!fjL tl|mN3(`Ck+W1Khx5]z`WYY=#ͺ[s5T95N#O,}>>U,Ʃ,bk_*䊗Jĉ? FN޾?}޿=ͻ#y}(q+0>w^f j_Ԋw557kXgjW μO^2W#fĵ1;3q[_.|? ? 
*\M@듟|t-1"d+Q(iϴ~ oCePΠf!}1Ռ, Jx߿'&=w}tS_y 7Pfv "ouH-#g&Enטux]]7U9ji^/=OP`|A8ʋyԄdX1P<qWoz~`j OC^WE¥_j3xt-OfjNc6x2IДg)\Uk: pfkS6H{ߵ11x Y侍]5'{(z3gFmy,Mxo&[ kr:,~m.OEwѼ[]l:twh>38U = ׿T4=/"7EE;mFͪZc3PǗHm[/dp娜PH9t\-u *e2 "-MR&ij3}omsG{Pfr>d)KԀiY_"krUWCmE瓓P\pEL zl/VD %p) r5B΁DVHu mm">kNg8pK9*>8Lǭvݞ}߃ikҷ7{])}ɟW8z ftV U!1438HJIaRQ1xΒj=n 63K|N5P'&-5Cwv5\=rr{ֿ{\@=a|y˲)t ]<3aDD2D7^Wzu!v( {Ԯ恻@@v$mqA Uc1Nz.M\ t XX8̒h{U۴P^VO~T' B+>揸W@Ԋw;ys}--Gm9[@ᘰo̱U&iz^h* WwUu ׾}+7ip|Z4z7EUoG|/.;G՝:~S?RZF[k0ctB XpZe63^`OhoS&NgL8lP"Ciyf3nY 93>32esA]O7% x૩٦#^I%Ec$(H9w%X#KcC*)xqk7|@:t腈4{!w?dv!,;Rg-ךROmZCῼ3&H|2[:m.SnQ1qY@~Fbͧqk.N^18-Ѱ2*F;~mF}4?6ï9eٷ%A[kX˅mi=E &w(twr\#ɮ`SŶ`#h9/06`z;hvĮZƶ]!Bʞ]@v*t,+WJ+*WrB%+ .lgkl;PjֳȮsr!?7-*m *y7J^7?_W'Ѐ9i%Mc#@)mn"b26=>jt/*; ne8M}DqYtb>7/Nl~ґGDZeASdFUp*PnjqzJvR͵x?DFǭ}~ǝ}  ;бDϓۡ&:WN?Aiܺ,֣/) D+!#ȑi} Šns !zNOk0lU*S$}h:<(SV1A0-ړOҒBv~i1WeL"ZR^l(ٙs fhaE2sE9 :Uٿ߹#,Z~ ?+.]0m(I_b4BВ&ή;ECRiN)yd@%CRʋdR8ZD8E\d\Xύ$$eL*$ <#:j>1rhO]C Me }IUfqҟto-?hݜ~|3YĆ3޽yMu{:)|[V:IDT~S~|vUܺ^-4gtػZ {,|BU6F)D;MM.x뜡DB0*QX#T.'ݽ;\{%,|^Fh%NG@+ld$&hP7JD4DRƍ@aHGq"ig_!Z)nٻ6rk eha ffqn \40%Fr{83G#mi8!y~ARݴ)(E"2p)Pp@iG9SgZ'yU2 %ZR< SNqhV[M*Ffd j0DR%c6rKV\3,,,|w/ia%ﹻrm0o f4 [Ŀ`=@ 1/>DxD\Qm# L:Ǎ"DSWbSE#aHQ?%IlR!`V0/CJG@lh]I{&9%sy,R;w+yf{[8:RKJ0<<6B+'8k8 Aȅ/!fE;>r S2 0G:H;;9a/NX$b6W"Q#zKU"jo8`NG!1G rz ",iP΁'nN232f.;݅#{Y˒{}e j.kIuMIlF)z94=b{P9 _#W'H9;th436] b֗LqWj|Iցq^뎆{X1rb *^t.3OuIPrE`02",2Q\G"8(Bd%S겺XHZ鄒1&%VHAY&D4rftE˽蔍J@9 ?'r0cС+Al7ԼjO)кi"}xf3s4JLJqYN#N œs) i< Az!(cl-|_{  HLb2X hL­\BkfRca-Q83(* %B# b)X"D)5B䊁RW!H:;Pd4͌l숳W`V3|ρ] `" N*G0ArodEc 0$-Y=]P>@N\) #Z*0ÔHmt &SZXsBKNp :V6SF~;:)'Jp 0뱓29 Ϩ9(I|l/8*WGƓ' !8o-]Դ)k,r-=~`X"`32"k! Zfm> 纄> Xd_e>GvC)0ka(2σh RHH@qN{+6#됍/-W?9tLG59(0"sҒ:R*CER`4a4ƃ^́I:|QѼJIc HitL*"YZTfi%D@GA}X30y&}ʽ0NF;0FƁ[ɤRRl#vAq9<$ +\}DJѳu3v=.մ 1%e\Ʋ !V9UJ#xҙR^ 6#@TrI#gdJQ b1L.N4_$~~e*kܻ2.|{Z5̊6?5zq/w `p˷A `Zh4]3H͜Mqaf](5Tj D T·#߬QwɴU;lE@<`pw9B'RP;1U.椚< #]@0! )G$./_O.kV70nCwV%. 
Y"cv9|a`+uwᦸ2 K|񩭯D`R$8V,]yv(-[ ™FnMs)]֌J?m)cTsUŪaYsi> q4L]z^b$2?R.6T*'~m*Dž^];~_L|w~yu7g~qwqJᤋ&I}MKiho4UlE^K*g;>[A\;CT5(}[8?\JGtw~҆ӭx $ǾWles%ܰ*wT.U4*wk۴S-HmPo@E0}]ך˴6FcK~f(A_eZzJv2-_s<(2` ZFM4TYn{,w>jBᱛD8_i &|&: nw#%*rP"Pc@֘~S'˦q/$w't)ji]/=NCBO f)SU 9E(&/aN(EKf-D~[Xbrkv]k%Pu}:(=_/yc(U0e@i NQnMm(\v*x|G_}w1]׻bg6v3fۓv_\V0O.+l9 ; 6{+y,3rkO:6./'zs+~I}pI̿ -ZplFƔj2 mrz kӢaUN>1tQ0%sH[]FSUbeP嶖my▱'k~> Nmj{M(&hP~mRqV*'s%\ruxMwG~ͬ8Q<\{P}ZLo͊.?typH%.(!:R⫚Ě2[⃺]+KYNt*- lHъ$oGzP\DSˋ)?3I94lzSۓK9-Vz&~ZXsa=N8ꏈmWڨfL݁il6!#gVњO? |xz9\2śo|2]+y\pGV,W:^zoY%:0TR4 B5:Ɏ&;lbmb>7*G>Pckj+75>2fq1]bl-kF~_αL@Q/  ڕDꥦ0+Cʘt+fדgf>*-Ь.mz7LՃiգSY㻓"z9J`"{&CD{fhp%(XZYh6MR=@o O;]v<[_[ -(ni6SmUem\=0w<<2zP|fPlcSK諾rJu/lB4 2!BsW.z)wLRx/64 px$}wӍ侜ÈVL(cu{E:Y)v(6qbuf[IJWxqXcR >[*2~75aUU$827Z{N^SiEY'"rX'ՎB`i (_U;L 'F\wr Y_ )˜HG&HFYt:cgD,6ȈJ i̛#[`Slg -v@f˪GMw!jx)elJa^iGjj!Z{11V{]G6rճ<pwrr2ކ"%O_q1rIACmo R޽]k꛹unW=hGAPnn,n&ԫzhP.Ԉxp:1-lg9|3 /}ᕲAqɵBcFj1eUB\ʕr uy&Ų^n5X Mya(x9 3c4 ģh\Qu5W0ԇ۵ GbSυ8 * D'.AdmBM"Pe?{Uwۘ EiH>8iKQDd,XS5Loc\st=o[{}s'tv:6lہd^@盇TB;\X2,X+Ή\6o qmc S#ynqyԸC¨*\~))&\!E$#GKf)eʞX?,bޡR>m8]>q}6G ^E'\րڵ֮]{vt"vĀ1q4vtoRެ<+@b!VH$GѲ1-zrԓӒ# x$%B$I~2#E'Q#,D(uIRx+1q)*^rel "V&{9~CM.x EJne \0A$ATDӒDpp(D]kdcr;QoQ* y]}5"1-d2](e!hWYu$ ?-~YgdS%4''/8>qGk8mINT(qR ڭBAt_(hߩPgW(0Pv6ps.ppTW0~Fpr>* \*Kd*KeW/  =Rz.pUK+}݊Vb %sUI'.o[utct{ ]֩x<p4prQ/AgJ t Jgq:Fi)]G,%c=J@ qٿφW5,zS; ney5.6y8PM_-(,L ;{02E2jbrljoÛbsxs\埿ݟw;_M ՂѪચgdit}URjѯ/pUk~NN@ȳ+N@u\ˁ+e |,lUVѮU{/45",+__瞆cleѺr3 Wd)H#Mx5heD(YT*c!hʔ hB'XR*8pDzo goC@o Y:o纼OZ264a*эnLŒ 1rύ}-Ҽ;;C]Ҁ>~,\_){#+КUS" '>!فSefO=]v`gI1Mh,qˉ<I 4FRJ"1Iu #M0ʄ9;HE2&q*G9W'pNPRJ'-&QQK ~ vicǫ)j͖q!ɝZEOtLZiYi8Ƨ'C!rUޟ);|hp;}>ר:\RW_ژ~[8T{XNёݧb?jUO|vdl~ןYv30(g珱s]\3ؤ,k-ȾybZ0ӧ{8}SPLCdt6 |F+l.d}~x PS_17!uw=;ɽ-l8etVv1;+Ӭ=GEDM7X[<& ET1Hv㔹sr)$QM.Ǽr Tx?.@ C[Zrwn§Op>p4w- j#WI9bMR 0D4Di[oe($H7-x2:L09.(}᭷bl7aRߘ%4N*7I;94caV~=MVDoP߬  k'fuTj(ɠ`6B՞ jmﶴݣtګ[\'+bR'"hpP!勤F,#&rO$_B Be (2JR9E@4D@+^{-A1蔸U1H*TpJ#c1q#fr,,b!/£bRܷj/h* '~yo65戝 E!r$hN 1W=pt^XdLFKII`,bz!6ah 2l2Nmp #xK#g;b(ˮv1Ea=j v p$zei$B1 Q@GM4CWNit֨Q42-U>ƐM2TM0|ܺh! 
a6T\ _CŒ1 @կ;f|t< x.1՝K`{ɲ>زԜtjMl^1sЌ@!RT";20$Bb9Y-ʡ/.唗Zis"$*,3Jq+/VLcC;t WZ븾 S\{3- :)ۉڃ[?םrڶu_Ư;YgvWCW/kElhsy{! 10f_8 %mB7F!iL`A4M Y$|vٛH"]{oH*DYnia 82 Y,觭,9s~|Ȳ,JLTqb.6*dV ((OzYG)'Jv鿚ZìN,<O8 FISd{ъ★ugR= nwCx3mupApJ))Kc\KÄ_&,x̰@H9Bl1"LsË̹Xd_k  $O-sƖGoQx'EH`EDsڣc:u5W:hA_cf 3tC:SUۥ(Hy(uBBlB&aJZO63RVVRDEju*kQN,ՀEQ (w,qsv|tZWé8 GJ]'TnuEAS&jԊ %$F-(FL+E:RbT{=8ׁuuLU[VwLY)MT1Ǒ0Qm [|3 u" tң}Wr fS(;ޖ'-F_\v;]A~c:a0& Q]. <2H+Mi?yarvKLp.vn.{BͥaSu6%SW8L@.ʨ?\O?ҫa[ ^#1JBLD`A"w`-R>,lVro1FV,avE!ui 6$֩OذM>jx [.h{tBH WcK4ٶB䌂~#"As߃,8"D;$)[ 5ZvŚSkalQx_;ɠ)=ˡUqydbfWcF#+`+_ciI4H \q"녷wTOS8 —<_oW ;8P/g o0@+8:@.AX'j zqD%틋)DbGhz!D%!zb<"ufhU"YUVȮDTza4?"ukJF]%r?u*Q+RW qyϠK zŧabJG9/8]Y#9a⧜"'?wmI !'{ ,pЏj1E*$e[Yw$%$n- ~= 'g3}F֔ՔVTc2-S.~kKj^`FK#}ҙZ!LEKZZ2b/'~I ~$gCrHj8.1k6W_Xr琢<支)'Usk=U m5ДCR6k_2`K K˾-MR" !'~ڊ{51{/ۜymԒ[ՂX[-EaNeɇ3b}:.AU{c*frdjuS1S-q I!L䊽l3ZTZU7 S=RW`?%&({2m n2$|JROgT/\74^TO~90?FMٟvƺ>@:>0)hmA`71fq߶< tWBqhcJ): 1DC0e-OR>Rp"ҭP%= }ăxm3'1r3NZ0okQZ*yq@5{MI:0V"U <Љi k$f ,quJ[ R+^:q6/eo-xIDo/ťTRY\*Keq,.ťT',6A ,bE,"XX` ,bEM{f"XX` ,bE,`o"I)"XX`X-#b{2{i6kR)L%#,X k)'(ŵׂZP\37L :y!C9|甇C7ù_'oٿFp9+ w@PFDXMm-}Cnyi 4s &lwi")@\UQ$Ͳ=>^T>N?"JR#&an_ IaF%g}taFLsmBuW^7M?8Y6Vcasb {yv1KAj0| |b0x!Y7kGrHa1v0ˌ6`'%׍] ?#,}158c*j؞k/T[T!O~Yo^B oN_|*|?|WoN)3'ur g`7u$(5 mG݂/͇nǛ ͍dh[O'|qMf/GPӾ U?_~~_mӓ|1fGAWqb~> ӯhM,sylTaAxGT|psoqtݒX#5I${$ՑsokI$AR8b W`uhǍN)1IzhLՍ'HGmrC˾Iƕ"]|)` UFg6ΕCNu0ؒ}x@Cg{3n۵}a04Xɴ:?o$zkߺ9 ;Rj#~wۨy3R]; mZ%ܭ[i1ɤL{p3ŪYn|8b^xi&Fk'SpFJd;vA3ۡ^l';^ x0\FłByN'F[ƌ Q2kQI&O۠&LGpGm`*Qfs3H1%b;uFΆ'#B}1~ڥufs]zD|ݧ`jw>v%ܶxY 5nDČDuQa*'1D)@CÈmsKeBGY1}:28l%I֌ɠCJ[wE x4NҺo,ޞƧh|[YvYJg)U g%-c \?B e6X(B-4uFw2 4MI{,cw{$ڣrV9t S 71idq,RG#kJ,)li-und4|zx(&-&$gbM>^>j]lׯàWuDPމX m5![ec:yWFo0+\i|:<*q4n,=5Uo^hJ&fZ>罥BͽpՉK>"[L;g1˭] *$NEO?6$JSKFjtޚ'VOEm4>2nuS| JD$2s\)x8 >6HQnLH;Ռcp0B0Hzd(V4DEwJ*5cgl},3x.B^t9E&9ybs|wǚ8LVO~{C~o<5v Mܞ 4N:y!%[ 4t+H(EI[s-/NC\ds&W:(ᕈY$=blbf8 M2]sA\r.wEkwڲc-.=v'M@HA b8rڠF]R4ө>LRiz#)Dː E A1@2ctAs$;#n}8!+Ɲ&%QVQ˩RZqZB謡(-72gҔG-УD=FE5zLh4u'ȹ[#8/µSήztuыE/zG9_'phݍ`v`uKICr,L}[}cOd9&9O3: p L4YAA ěoyq 
~ڍ\&~3GW,x|}5L_`;r }Ǘ[QK~\jgLQi,0 EC")ˣTjFΚ9nԱ黎UAR]5:VI9emwT$ϧY^[t½ Smx͖^_~3 ƭ=_~bj{0`Vȍr4'b4ːAS?K/Zub~p,,X #Cm5Oւ JĮq;E`BAIQTF]/  mbJNQ&Iƣ9 ״eκV'٨-g&ļRTc("y`e#JEmfz%1(f(|v:lj  }mTb.>ŰCT9pIМ~9Emr{I87 @Q<5&^#I9\}p΁`6 `"yzWDe ګbԫ{8k ;"nXJ9먟>둢 Z1N0 '1VWqV}J^ m8ΪʢP%L݇| 9bvGc]wtYڃz3R7@c:GLSւ"8Q`%B;yo]Y v]CDuhvvW+>r64|ǃ g\ 6p`0~7Nrޛ }׫OǓi:lw;|xav:;]4M͒Vu5 mi=ʉYq5.t'KQN\W mJ(+hAéAO9T܉ZJ/@j湲M#΀ ɥ0'HNLK^tQQ 1P#xA༷THY"V'Peߟb1rvmk푗$ۗ/m !6}f.[nWݻ.Wȍ 3uՙ@hԩ Rx_nu5o]Щf7?ʝcܮvizq DzE+-7t2v'w80\-'[y IܖxE]u\UT޴ӳsƺg".Xnm89T ` (.N@-jWѺHAH[é:.`VT@l奷Ub2f^GӛDkz!e)e'R( DWނ\$%*E`%TInsV9Hs9t'\^A 8֗ ;׫k襝cV=cPoaJe ? @4`* Ts$.F߃YV56]oC5[?ɴOv>^D}qz!E4qN,J}[MlҊʓUWe(05uHc8q{c89QAdjʔmiK@.:RO|O؋S<9>x-ztޮ2pfsJm8`=I{,TޅӼ>s!q6g,G_=\UvnlkV$a^~8hrTqM(>|'apØ_5.Ѡ1WrVS\]Ǐ~tyPP ~<#[EY)XO] $z ʨԁhד+KrmQ8e)Us3٩;<3(E,jk4ׇg_UJ%9Aء 1),Y>4~:u8=QP+}\{`.]_98mz8:Ged󨋇YWR󀸐},aW XsҲ|UNP`SW{7o3QE]L1/4IVI_BZR1}>X&)KP,-zbe(#Hf#*h&I=#`yJ n`Z$bx.qЖ eV1'XtGji"ƹDl3b4.Th|yVg!o7M@R! <%xJ(hx`K2ZCdD[YQ$Liѱ@Fr4|>OĄd1rh<61+bnߨޫ D,WE1T@WQWsL" , RK!$ZT3*@\@Ad(1`km#IBuyhC hμ,ddDJˤMJxY$eJ/RI*7Yɪ'2b&PII֤ w'fq,XG,<)0|u]W>,MQ_y_#vAֳg섉!ڠ#PV)])iG%@"P ƯUM|vA" (ð]TDH+q#6̦\iDZmQ{d;r)},Z,IX`7ZǦ4r>&S6%eÃWa郕l^4EgM,B)b1GUE,Ef|jLx;r`ҏc5FD7"∈cgEH>&Ŗ=뼒\t2jw(#ARۖA aߘ,\,KްG89rrfOZ&N܍89E2AuҒcq14E?∋;3Ҫdd)b0!+ڊx%o$$UnœPv1v[Ts-,C?GUrNC$6pȧ T2>ㆭIb&%Urv&h%F2V%<6]P>3x@퇓r,x6Fcm%BQJ*}:0HF(gy"h Rܔ >12T+pYj@Q)D)=X&9˝9 1tH]UlE<\ښ^:=O[6}-s}3_ /AQ96BY'+Tef1ךf/΂G;{KKH$R1,8mDšCQ:(6=%b[;K$V. 
ɊLkĻU ЎaX"x&-Rԍ8ڏkyCQPb4H "2ZL6jfƉ؈ څZ=؆qKLC:k\^ԉ56GAJ(t%%E2!%ۄ֒[jȅ^&cmI+%͟g4*T|p$ߠGBEABirN@'Ϊm`*e^@K}\^ ՀeT *'[Ͼ\RUn?㥪 c%IҤbd$AeS 1alYՈ'A 2f 0)RM%LBr)01`T!fFbr P>6ƛ]F4@JDAE{,V&v,0kU)LP珖}ٲGk͂I}>?`oyvVOzOS}D|[]`MMo[׼XpA/Z 0] Q(AP W83"KC9&:[ymmvcd2[SFi<|Ylsi\y8>1r<{WЬC-jmnh4պR$#$;U„URYו]jf-C%C.RڳCF|W^ z8Tr1ø8 N2kw=EE3J9dUH\uWrV0~otPWaH~钮w8zuey$\W %쮧WG,XO,j܌.!j?&vdcR53.zɜ,Yʄn)8 p_2OO*Q1  Cz7isoO'>v&cFբDg|,ZCgRt]LXfSc {D!ORo74:ue $C2 $dE`#:Pc6Ѳ'oLyw8C.ms|\ص~JԊ|sNx\ԑz!(߽l{wtXeC;)N~GjOSuկ r޶LC}'r 5gZBoWM4SӅKYuF풧 U0ي4ҩWZCS9J.`z_׵s2iP Ndˬ Y%QyFynyxDlQA2!sdyd*N5T oLSWԭ[>n#?=^‡]ȯBm ]>JoY KxQ`ut0𰰴O賽v7g:E#w|6q1+?@grX}ǯk˯91&0nx8xwԜUZr>A][| y ?od ]%w@?ZuS; o `Hҙ,bbWx vDUg斞>H"gP1k#R5Ec0QHguM0 mAո>9 UJiw3'hH:'Bg(J57G osݖn5zgw"Fbi'u‰5많Β$h5$PPYl&k4mq6D|u-8[g|0DPi/_4lrr3"^Pj0guThMw^2_M`l7xɕ aw෣Dw!abq5gǦϟ>-BWvq2t0|l`teNofoofofotfB%AԶG58kÓrN`HY@+^&жny,IE-ZM$I@is2PtESȒEp$ND8-q.q͉6^û=-j`=6#.ZD\w ̒n>_UipvyLQU$QLA߁3+E蠣v>4$O <`xH$O F$v(.̔2n}um(bBXDdBRz`(SqsSޙEYx"(3JQ(ae ̊Uȼ;7;"f){%_pz=xî ) K&2ѱQ D=kPRHAH2. Y۠O϶*Poh1]Y!`hJ3q 2K@Q S9pJR!@|r(KBT%c*O^GU4+1@Ђ*/BHJ>%H+q؜rk*{ٶؽmAr- >Yx:xb)>ڳҢ&O)]L!tT <%S7ܧ)kןU)>g6A"HTҩbU"3d ZR'+s{^H߾>~~l۷mٚ>ȋ'bnn3o8'bGƱW).^5:7U ⯰ůu8_՗/gWK|&뫚<$_V79o9^aSQ?=^i AgUਟ вQUZUJG}@ktMg6ҙ]dz7/)faYNɿ 3_~i2wl'e'Dw¿g*]o9W8`,a. 
ng0` >6$Wl=8ȴ-' VwEv="?.޺Zhk.l8;zuL8UPLlHAI<>xLxZwgW!)K q>*=$ ONŌRiRkzxQ=VV888ژ{8Rǵǵq\ǵq\5k~,tD/.~2E\kbܮ/.*opqQ.q!%Q%Q%Q%QdZmd'QGH'ꉴz"H'ꉴz"W8ǝiDZ=VOiDZ=VOiDZ=ī)ZΧ9z..ze)[uQ.$:u`MzM v^_.j$ٸG k8on7t2F4uE?16W.%/b\s?Osi_lL3,9_6gCUf)bYHB_Z"%f)FY1 %eE^~JE$c J7RBkYĹDi!]}+yZäf}'(kKgR-I%Y8>'mtohLdΕzϖNz3=+nOK^&W}aJc\f9R'$d*2"mX) 9sSp Qb,sf,<[L9grR=c5q68F(fl Ue_(/W_mvV׬/BO4|oh40>{AP@#tB@,LLD2Z; P^KǖNfEP?dR)@Rdca YWcWv&t)]M;ںk7{kF Y 92m`Jr؄RR9(Y>crV6iN rEYtH,$!H>>!eXa5qn}RǸ+~lMehGlY,LY  ˗t^lu ɍF*x<g&F_8c*ʍQA6degB )4he^(+*kWM GH:_v کZZ_tm~ON,笽"Kw@.Ld2r̛$DQDm=]մc[!lY<𸏑>e%@ sDO|܀萏DJ:N$kL$S"ǎDӱP>؊J\+5>e+YQ)v\T@L@( 5da4B4&xgd[T`q*1&Q&CD0BX 8Ao=YM LCwa]a/LYmtޡoaCEPqz~QIC#(2ی6yԆceWBsj-N#,>2d3Ṛ(4Nu*ٕKe !m3{weaʬU9%rY3vܧz{7gy1M||Wf6NO|8 gɽ}` S*dkV o=8DosZ !JUO>gQI $ekZJӃy0wZ#%qOxF{9=A^ftRmcٲ~ _i܆pQhgE|͛}IGڞ/\2f4wj6 YB+`ng'A-0қ'Y=^~WfG^\f0pG÷Gqp1&o_v犱u-I%ox]3b}3v6sYQ^&OzD#XFDxtE΁Y۪`7Vl0iΊK( kRNNc)&yP\>|?KK /qxrDrE/_eW߿|}ȅ;|_/_"_hf)(;;Y{M[YijoٴtjM^/|v].i31L_8>U H7>~?N? u\W~ʂEĜןDxd}\V{+zzU**ޅ8,LW~d=629e%neOėwW$E$eh,qβb<' ARTAؒmXvA;r%@d2홎 o\JP(}H`A) # ޷E*:_ w'UZW5bV/-_2Of尧iG4.h*-sS. :r!V86"\kX2wY"x1}X) @I}aꂺ] <c"+i*oak/<}l.<3}O_e!}ON&0ܿ?OHR >wVܳK^ݤT좗0IV#y8+<=yv0PϾ[.'%B ]}ҵ,tq맸kL|Wݒ @Vhj@ owѯ,Ⱥl`WZx:w?÷gs нWgW H瓳ӕ'ò1woZc8_~6_bXLMP)U7;= Pc](:%YjEByr;n{6/TQaEM9zW?7ߍ'k&-i}W֫ 0۝S>B]4[KaO6 -| 1^RxDϓ|p܅ `J˺˖у$1{ڰv|ò⻶ay-[R,(c@'u2"*L,jLZD.[!m5D%]T*f]HL&sFx%;%93kِt|({xKKwK{t~!;-Οn>:6;߳y=?2i~*i`)2K9p Kl9 ύ JkƝ*@1s0Xh8Zp3%b sRywr$/(\FϭBG#=USĹ݂Y_N&d;L9żeYҧJW,x߿Bxܿʌkd D:TRI4eIe-{@*1A Y:2Dt_S($1L00#}⡿z2ش+ ?fk'|9s @L-s~]Я菷S6]N6 IvJ#J|q/D[VelqH޵u#l="Y\ψǿ}|4Dl] F+} J ˈDG>P,IթzDxDMքDJp5W9 f ;ځlPM4"HIP^zBeX8AY]dљ o9ہj_F ? 
%_VnEyg1r<{W2[!<f^C.%m>#} Xѫ%襾e^W-G š*Cۈ"e7  ^x1&퓓1O_8[he!!RhUHƔ\RDE6l[Cҽr_}R5{6,9qt'iDwӝ7Ӵjtd˜<ޝ.`c]%h1IJa+-Fmg˪G[&1.f9lņ b_;muBꗋ'JpƳKj2R!SJ"XTX de%VJ w'vi_f#^W<].M2$BF S)(&S1*rUǓfqώ'-:<$gbXϹ]jsn Ky鳼( UJ R Zbg#C(ا|Rw2姞o o-7htnD vr2K-Oݼ ELl!j#hh]dj,ut&MRV뵮ufgpkV.'0AkWi]c!zҔYKF -D V]6ǽ5k}R=+dwVOsA=|]Cf]/Ɑv;DuNg t8-_}Ww{\fmjG|pǿݢ;`gCRts+^9̼{orB{XH kdAtKvVeb3KT"jF`!gkrE'}H)TEy,鑡f`TqtK3rvb4PBx<=52۴[)b{.t6g>ҭ7H+F@stYzWӸF_eNpjTl/ €bA|3ܷ3˂!ղ دTIy1N??o.ir´~dY*cuCAAv&ukxQ!RZj޷? -cQ[3q?Zz@52b>_5MT[HBj CZD6mN>f^)TY{)1]7 kX/v6#g@'=9T$Vy|Yl%t:Sv)^yhNҺd,!E^M 9x9NN)1%|()߄Bֲؖ2IrLr;jj'f $f1U"*ϳq=zic4bT< JT9Kk; UeD/2,4o.gtggCc9kFΎr61[W]D N$WUDFRChL5AmEA*%V9']|b/-EKPOՅM15ЛX HsJz4Jsj5StO{;iN#3Q Ph$-SC :L[a 1RRU8^m<6TWkl甭C^%HT2% fLl<8VȽ/Gi;?]s/zor}qZk ZԀ]Q'Eh ՚*5H(~H% E1P<.fn!h,瓺0ZCِڸ E͑(3Eơ,`' 6`ilê1[PU@EV:OEt*g-(@iFߣW|q?Û54OǺw(FW2btU=srȘH/%\QJTϷ>+ 7U clT~y;[=9͜{~ulL,x[e~8źG=Zsi8tsޮ]Uͫ.orݪ& ] Ȥ&|ԅ _OR|'eP!P<ݞ ZO6// E~x?_Tûy~ӜxS8XG&#?[Kϻ/ji^o48Ҷ]/.뺖f[c1;LL~|wc~8ILtvSvLUp=-?d`Ni1Ōޤ{IUD3qz3đN'q?h]$oIqWn$%#D R-x0&X Q0F6,1onˆ_Um*X $ٻFr$Wzy006/X L8%s 499/Vk>h*=tZڀi7*ɺ,!zuYpjIfVYMY[ jlM )#ÀB5^pl.(ЋGvR~x A)EUZSC&]n Zmy.GWD퀦Ў>CT>ċH)PH$(y(s.h釬 U]3rCR茌֏:gSZ7 7wW)՛K ;lq+d{mD̀7h- 0N/V}B]c MnFDDdEf@ɠtr/r3'{7p oקyo~`ڤ@ 9`rEEa 鯈']R)B,$QDnŌOʹ%ݴum*f:<>0<̨]_HyL_[v*ή^tXߠ{u`L7V-ϠT/yEkjtR:#@:\芫U$7?Wbf>?4M~3]kmҥϟ'__FG'kbڷ2r|oy q]AQobN(O!Itlis*9L}Ѕ9$ UG5ATѽMlzmoP1tI?48yIqeR}׏i\e^_`ՋZ,wft!| L"dPH Cj? h5# zq?p.-O!^4h? 
1iO"vEb92+1xERط,g,܂ş5cG G~-O8Nhy*)`^JY礥w+V_CD"H @.J8L9W M@T.M܅Po|ь&"#Us:xAH27%=,[qﻸi)&ngW6KG#3׫_-)Znҁ.d/Ak (pAcgh8HM98z : d/>_0 ,.JWrJQD:W3rsFxr}=+ۼY:k[.٦XԁnW􍰖aO,GӋ ̸1$DV&!%1MyRzczUUFoD}Ģ\*F] Ԯ~h׺5# Iwb?V56eu3eq?_~Ҳ^>bxS;gds`ӕdSB`'@g|Ϊ~DU|`S3|scI2ɬot6n> ,`RgtȐG<#ttZ>Џkq\3WT (e!n«0U77ey/TƦo%R峵Wsd>ꔄ " d]r FB|-;C# :eGu-_h.,h%b FZG .k],ObK^:_H O(` yS)ͅ'$D'kĜJ$YBCNQ@zo9CEOD}gKP[ǟa 䩻ͽͅMe]|A?o3&Ô\$+8ĕNFQ $a6!IS@E^%v..yH*uUt$ve6E'n:p,O$  .eP>/r³}D= 솸@jX D6K-xee] 6읋qB#J"*J)5m2]^A$PpECҳھrH2-w$\{8UNNgSZLҲ #iOd[*u-ZStOo3lxdxXkϯr:oe.DOg&ƼCtLtEuǃVcyO^;mD__>_.gnJb9$XoUIl*YJW8!A)(PA` b#1M68qznl6kpBH2c][KDoECČ1@3I 6 ]5zurΪLj3-ͼ$f!0<{Yuα]rwlKy=Q^'Fy{$崮. BpB`U==}d&߸vW[[{4]m44ʶ#!-BRҜݎë_5&"bT%5 6Ix E.z"H  [Mb?ik ML j+"!,\ rijbs9]9'0ŀ@-`-wUR,Jػ9f1ZBx<|]݌aR+|˰p0w{p[Ugg=X=]/]_}puJ? r'iJ ?Kx#3V5[nqX|hwn׻5zָEӪ絖C$!u' iǞt,͛fڨaoijַX.uƻ?tMo>LQh!u,n6\in?e/\gŗ^QLePeh Rr+cÎ"@Ǵl[eȲ,ӿ:2 GJN&FA1G!e4 " 6Y71ɋO*+D G"&q-"M'rFtI#[؎[Ȱ L##1*%H#M:. Qn7?b>() 8!j'J%K6ڔ&squϦZA[GpTiR{K`$4f1tN9N0j/dTAYkP}cФO"q#R44b_"zJJnC471(\ [FN1H1'%BħPt6{輨|6_~1&М,1qt.&`cFaJ[ >w;#Xo^Fmh ہ'Tn+{%:/^`!)b}dZ_J]t( uKP#ݦH{<{YGy֑]%ꏲr,A0訥@!]7N):PbeOL.sY-ӎ+Vݭҫ]d_@x|jסn٭nU {)ϓ?~( +s8A1T20S _QGGR =$}PsËW>7:kA%,\,Fd=S]Lu$'u='WYO?5ژf*?_/ɃJL>9@L.-<yzɭ^&"JHh $Q/u3X S (y6s.H55#g?0z( !!C9p6ys@{SW@n(x]JCA%:Ip9).-6@ "f,I-""+2K'Ag/!^ۊQ-GJKopv5ur98Zts=$-2mz=0c+wHj}ZN1_Yo?ɧ8Z}HK0-Z e-pzuQ)\7*һ4*UfԮ/bj{ۺ_eq=6.zۢhv_602Ȓ+Nv"َM;rMl9g XNqq<?Ǔ^Uu=[H>!7>o˼ ;?=P0j2zPWu 2^[Cq*Q/DS㐎`뫗J}O]<)u48;d +) 0"KJJzP[K(D)JiA(C|$C(A䋓Kŵ]tSZEjI+MT@)A+[BV"[3qxnCN?.ix?B|Ǹ \C##S+{Oz6Yyxp3)%,AJg BsJEJAYD65 JS hk?d |+>)_(H2B`*I֩QZsY_O'Ӳڳo;P1o+v,| y<'GIo3m-nj Sȝ:uکը.`fj*2ǝ{.ER LŹd 1JcD'1&!5JJn87𧦟w 'ecG1Ŵ }"_ޟӅ2oC ][w|:#6$ehhn>:ygIom|-OdT 7?\<4E7mX02GR̹@RNB$ 1WÕY6EF3&$3b+TkqUT([i!Hit( R8wgь4fq_,ԍvXXYe2/;g]"0g}oh40>}M'FSI3AcvD,m—Tx`'3)͔.RA_kbfpmX셬񃪰 %[ddNxdkjv[ouvĎqbn j7ӎif;{%Oq+vggMKH1jk"d2 f/:|*HVi82F!bHT s;N2'OmAfq_Dq7xb卼aPlX,V;oJiQ!FF )lJmmt/NoLmuQ2dJkN*usnGCg¸j}c\t;\G̪Dd^{d}WǠZ%oH$H;\|\ZJ9Et訃ṵz18wtb ݦzĦI=b_6RKe$t4TaNU\;8%VdXe ( )Z6B.2VRQ^J)/rYnkYIADTxT&Z0V bAv#:6F!*6= vD 
(B";V<9,B1^9@vL;YoW ̬WMʗ–bpe"2<GcRZh03w&ο_p ]ξ"٘OCKFfrN 6e?"Ġcp6 YC!+cjW<*8njNN&NsPɶۆJ>΀2MGDɨ}B`eR-CYd3|VZflJ]5g}yMr&r.d=;uJ#މ;t@B AxF--VPЬY%a! A1PMl\7֋Q˕mÃE4>Ki߭o)Ї~ca=] NB$Zx^D/__VX#S䵴Shךgx;q=pJ'M"X`~}>i{Z5-Ҵm֋K+oh\qj ৳?T],#6?~9cG<8+,6|_h8#zT3Jt{]݈pd37 ?ӚuQ'7Gי2'I>n;;I F"&W ,(Zxk=XBWPyG%s\9IOڰļ-<%Dm?dxprP d` ¤,U>*x*JBh%PXHv:M6uB+J+gkهV~5r څ[~pK%\^qwfct /]Qw3tڐ쐴jm Jᮠ7Zt"JIK\|QYϪpE0'̎S,1C@rDEŐ#̕&y/e1 IW(2EBe zVHV(x 鳰I 8w1?qOJ[ݷr>Fø!u򾛗JNv"GUH}%GsG=]p%zAԡg+XrLjֳrlIz.o_V6wJfO%xU2*`Cϑa Ϩ<fZ`Q )00)g,"(e8AY]dє I5f #gCZW3fnQO뱃17KWE7|ϻeWˋKȵ{0TR9u*u*xb=*JlD-I=>X Wq]3\V>%"xbAsD frIzUk=qf-\' 3[ Y4%bл;|O^i )N^}].gӊ(*wC5lhZ$HKtDR>MA(bNn9g\KQ E<2HrmS%XbQD5h18N :g*DPbaR*{)zIƑӎϊʯ nNl'Q[p}qm\_k͵{fSaͮu o~M}ϥuZB $U6H>ey@[WMӛy.Gkvѝ!y0Vozuu;7!m9r576~iֵޱ{V zE6ԜYl?^#|tw`:PӫZ߹_M= >Ӡ!MAB`ۇu`gv<z<Jp~דy蹦,v*Wmɦ9xyV0;q.9wi7q?wDE;tsN)0NyhC`: ,tWc,Q]in L *A)Z`b1<>Ŧ)vTCk9ښSu[L%f|}N^k_G/FX5J1?]7 rM '`yx}oӹ*8ˣ|wޟUFZq9C+5"ʭx 9v3y(o&;FI ZUD`GCR U۾|(*A@SGWY`s<Hwvm[+ J~Dpk~<,w UVUr\+4׭xᨄ/ ܑ'c;> qL+PGY\r4YZ],=~==˜xyt+Խp䅵e/n'%{ hW=!R8"+u4pX e`WYJJzzpEB\R\ ]eiϮpQԭIDUc,<\e)  W(WY\u4 Jws[@Wo4] "E;S!9HjXڵ{ *|X̏^Fϝ.{#X,0u+$, \ՠcaYZF&ssgoMH9i9 |Rpp,p ],|J \)N&GW(dh*{PgUvIWYJ-zzp GW(0h*{PgUVC d3zpdrtE"+k;ikOrOˈ_lY׏K~) "Dsj1./ NqϘ*v/"NM,5 [Ugjb(n'dO$p.$ R+錊F| {iN*J1\m-9]#[e=~1]:e&XO'ۻ_ a}{cg3֋d.jl+j=Q5řԥ'R(W+r}E\_Q(wtˬ[P=a.e\_Q(W+r}E\_Q(W+rn~p;&,&G5X|CPZB;(d}Rb48uT ϖ|^5sFCQZ^T$UJ43L*˖i7Q!:dBFl4^*8>шKgX9 BTPN$Em 𠘢Nzc 8| BZ A:A %mYoܲY )R3fI4 4Ŗ(˛~ =__W8k5 M-O2wFy"gM)kLh*"x{\W8j9m]8m!8U>< r½&Uӯq8*{kMNJӉel+W0%O/؆@vI{ "B*n,I_ qq X"r*cd墶$(ƽ5Lw<1xd{o'^]g-)=MIy4pJ$M Jq~.yjƜZ"q2 ]LbIb.*N' t`YKZ#aZ]a@1q3%;(AVL=EɤMXY-@|M+K0i7|s*vǎ8syJA9{nWW$8C|4H$GѲ1j+I*)zrԓ%G* J2IX Ɍ1G`β$3EQrDOL 3j J1* 'Km$!ʑl>Q1q֓#q/Rtu]8FݐmLacb_i&Yoa6)* $:MW<tjB9ZȘxx,ޖ}Zi8{z7YcP ]r෽~B Vy vr2I):5mz݇ 15Cw1܌[շ_'E12ͫHR p!c^z˞a`srȇG 8, 5I!$&/vg 9-C-mi<|>;iWw/V\-Fٞ'1zBdDjҐxT>U6oC+|SygED}>u[GSrTYtFڨSJހ.1R^$b )΂p({~ʄ@8dzfl [R$NdIr.<Os52DS*9>)&z]- kHT33e*3/{yC8") QLe=ga=NOט'Up{ 7Bn,T.H^!L籑wM{tu) Im`)@:^HW6rb} @vh'<_ë͇]  D+9*@IJ 8^hcs\jج픯<Ma\.xk,;'hJ tcY_ 
[Binary payload: gzip-compressed contents of `var/home/core/zuul-output/logs/kubelet.log.gz` from a tar (ustar) archive — not human-readable as text.]
g CmjFg.Nr^gYr(.ʸh:\pqkm;'y΅J4!/AdCڠYI$ ';\\<<6;Cx-e?kq2M5rn8%0YEAp]#KVkE?Ƣ}5Q:iUhelDFLR1lp?8?'$XZcSB_ t R(fh2cRA+Z&Bk#"T5q&hk<]I^)d=2D΍WK YX7 &N>Pic ;0IV#tRሻ,bg@/(<(o =#/$GIms2%yʀNsin$NNKL;F'xVLP^ uTHνG\{@'cXWNqȉ[Y ,&c M۔5; PTֳjg^ lA eХ}Te N-#*:KRN[fĜ 4A0;eρz~_*%Q ^8 idN?IڛBX2sԙZL0q0 ## 羅<Z2E{A)%Ik>e*1IӋ /5'<6^M>˴VBQZ/r Md&pi I"yfd杍^3ffˬ#B]w@MR)rUׄE!Y ؍erȽ[gEAN Bq[Q=Qe3W!js5`ioi19O H֐(q(.䱂M\*_!}[:윻Doz'Ka4.z;ro|˓Y|I˓E(HGw ䷟JI $b￾`WG6FnRʕpiJzJ J+=B?+IW[g}ۈh/e]"ɵGi%tg/N=Iז@o\ '{8}ћ=OmTߢ@-fAVx8Kj煀/u' %qzGV֥-;Aw[y`YظYy $mMBe1FL5{@cƕiއw1lh &\ʍ`o6 g-iSGxhG "8tXcFO>(&]u1\kYLu3j:9{(e~B?}!m~Ätqv/)9h?֧cwiٯcg-Gbĥ˒Q4BtXFPJnm\>F);`\AKkWKCvL  <ڈ`1u ytڳ9U& b&ey,E SVH<ŕL1NrjSWg70 }GΆ(ڔ!%z+[;)ھY"R 3lz&xጌȄ0Ss`R0QdT&z,["W&-}sg d%4i\dXq %G6!`79J¦Dn!eಃAϓ5JKVJ3FIpyJ .`de`\AJm>~FN@/cb0GM%2jQFL^_s4ٟ׳ ޵?q#eJ\~*UNWۤp O'dHʶ.5fP5#1"1@C7-?ets7)o_Sů^Si93͙8Fac"d|s7Y}Kk3<Ǣܕ1gWѲm[ГϣUnE}4k~!4wד߾KhŇI৛9(?ˢ/z2LGO5׃i ?oav6#Peuհ.>O3I%KJ|ϩ"g1Lj-TE%,PJ2{3ΟG,7cE3 OfW\4ʯ*)Q[sUETZZ9]ڧسÕO㕔ݾ)·o@}4:f64'Pl@TJof[\/t-]d:THmVRcqR{KeeP6yJ1Qe;4^W^XT촻_uN f?#zPZ 녎ڱ1Jz꼑>z-"K–L u&E)dJ[xʩc幯т+ nǻE\LG (.XghņyV $Hz^@/NŽkM6Mr-F.Ԣ^yrn+9C)e虢 )$m=lbP\nv] 25i#IQN)93Zl<-&zG9bK{pxPOak̊Czz m([cR;ĬW逰ZiY@SJxAlY4Dh%wːbB0ېees(ϰ"PKY4tr݅i˱ut55_ C~n? 
kS۾OUɃ#ܹ߇iR}[WMY+ǭAz`-4i=EC}ML{VeY )u!9g8`r|D1Ǒ9S4$rHNzCI-\%F4)gJ $VSKd(C0!B*Caj3jX$"﵌`Ԁ'R[c@*wOtͧCecvtoˀ;p({_ԙjRu ^x2/l1_nr#YBOLRJ%7t/ T:/KSGcoڷj7uzW+[\+Yn~nW!GTMj.OXWk$K\4Qtd:ovLo>!Yuh?m/|IJqzE^ͮ *v6*Sʫl?EK:nђ^XW3tL5hd OTp9j  1dAk$N0ipEDttAxu+6?nM'Ae&͈5i4HJ2H2͂"Js 'SLwm7Im?m/a" LRaK#drPf9J2`b"1EGQHPɎ_+_r$ϋ=NRgtnM@iN S3mQ@'O w6k2⣼̽+nq_+q>)np09-:@A}k^}꽗r;Mꃃ IUyå O=c)\<3`NLj9+j$܎}{鮚פ*o#xחt3}};GUTtb٧|XRқ| .}wЊO(>'0c7t>5GW &]+@ ^!]q^mTnMBH+Q y}M,4ɲF@?~ox^Л)hn 1<#3Lϙ>#ΐ?WOwwM)il{U߲rбӏn| KI0HRZ} &Efɂa}<>mȇjH>lAӨJnG"V3o`>>/=I9 l< r8QLJJahU1۔+(m6qZ|T n:Sj4bD/l>F6w@ .\!{d'ƶOpmh1]JB+~v0(WeOx=JzHWR0.I pr]%+@+:]%ҕR#J|sWZy@/5ҕku*q\ͥtPJqCWtǡ+*VC?_ťfC61YK|.]]~Jx5YS‘|TK-ˣ4g<\rjVaC>NoL>^ W<^h{PJ-n{p3=rCTBxSTQ?Nrt#WHwJʏt *!iA`BGX%O >%HWRLEzDWLpeo*՝׮%HW8bwE 9O@4m7wݣDis]r\s,O'xv?m {kxui5BcgmIHϵ,wJÃ&G 2a !1;Wƶ|GiJ푊 W؂hKENpyoKZyPr|܀|*В)#iE?=tJq܀|tfOJp) ]%\vJ)t Jc]`-c &7t1EtPvr8*Ձg׮K~h9}e!t+vCc-]`Eeo*/t*u#]B"h'""zCW "}+@1:]%]vBWLd {CW nVUB葮^#]q!]q]\u]:ջ;C_I퓔2U&oσHtSRɗ uiLۊMmE)x7Y>Bz8L.qS3g [ `AOM[P"o^.31n0mx;͊MReX N7ҽu-niןN"lIYٚ|Q\Cy+x`U= 5|Xv< z- 󰛂p]2W-z ]!:GDomⷺS2fr>$NQ>4s!LE P=XXE6`KvG +dzso6v^A" ;hL|i&e?+Vyi>eA!L" w ˱A aϵԆ+o&XUk+1t!6Rd<N4z<%BAY }9VM9HÝ}h`"{&CDisRH-!&"Ŝ͆AFL+bM2H(QHYuFZ`04f,`ZFL&Z ihD ]֔+)T&Ys-BF0FyyȘS i035I- n|AAda :fNhj[f9RaDcfٿ 74{ M%a뱄 %$P&A:1[aa]JY`aVqj|JF4L hBLZjRD,aK#f^dh\Xo2RRZZ[r ,BY˴OIлf#f 8ϡK3hHBQ 5a$msU|4zh4 ;@QAsg9:,}}M|j`@C Ӓy`X  I/Hv`e)A@%(^#|VEG!/HM8H#P`L+ EHjb#%X]m &HX/-| tZ?{ǑlN.w1o.f0|-X&ۋDzn%ܕ X֩/NT2K<=7;Aka&xf|i v<,%u$8QpyJhr*%@qʋu&6ajcmP)XzV©%mƺErs,D=tpT!]AT%;~Rሢt9P [K1fjpѮbT6z78(bo8*?AMj=% ()vy5qy;LbM-PF.' 
8T A I6S,`dAQi ;8MLj =&bO0ɟl丱vBPaw^KC`;14d1a$A3.KHL64"T[ R\`AOŚGYN֎I\ D5v)֚Ā;cJB% b [fT&g_`kf7ޛZhSq: G%>QQNEWgL3f)Z6 mߵ@aRACCw Jq&D{Y>)"F$Ϗ *+=lo,$/乞( ʻs` Pe s$&=a]1ƇIs= &tȠL!riPujb4db6SFq|1N@TܣbE#&d1a=L|a_߼?޾}{'W0*V=V~d?C1Yx9eP + ':pp4L$p-8MJNg*NiZmUf*sX|LC˓]@MrDEg`^@hz+4x"Q*LԴFb}Rq;&T|Z,Ey9 {:4%N 5w뫜x ;2x񥏉EWda:\?3yMgU;6\0m;V+/0I NL?-e߳ݿ>Pg,_:Y8Rm* 'Xk'؈P&zwiT2>#Y4 QL9B/|\Ҍ ps3k`)``2ݏpΞJp9N:Z.Nƈwl }Pt ˘ V(USbIL[|V cK3&s!Fσ/$Lh"rQ>)nŢ8LBfMdM5r'Xt{PZzw7v =\o~D,:jR*v6AX0lglPBЌډpk̭rVtv4; 43o;3ZZٲzk{%~>)2YK،T jfXJDiCx'-D׻3(|3n޽O ҙL챀`9ϰnJ\i OW:Xs-pr XS\mq̦'_csXi&w =Y\Aִd]{&=ډd7\0Uk!?׾Lў>[^Js]N.CNr3A-Y.g߇ X3:m gXCYXk&vX&Zt F2F9, =KY O`:qKنRZ)nIr80)q8뜝TL˥)D,0TKEu,&nRc[MH B]pZZ U1$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Mm7 DrGy$YK $ujhI;4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@Jzk}H>^}s#[M 8{{wn_I5}:@J 7aKr(>Mc~XG%qex}'q*& GPIUa5kweOoNǟuרV6^'?c§~w8aVn>,Ro0w7 㡲Nn¸ķ~ꃹz__- +$p*KzcRgwT! O_}?=V?./tżpa-~%Wbiaviˮ #^6M3O޽ M07]Es 073@ORGQ&gF$8ݗ:B 3`0V4ZWnE%Gut!hFU8W"yo~9DeL *r;D0ap%ryw%jc^;De6 *ˎ8 8[? DQp%j׎+QYq9|eL`{y\-<_Wr,MvHe+3 pŊOzkM W(o0 DmkT&&qdHJqܕT\WsׂJT[4+< 8^2F%v\#ǪȦ@" ap%r+Qq>M0Ʈ8U?m$d`q;G+oL0a0 l(׎iQbzv\x WAHNEc3Op{sw?&,_?̅}B4Wvm>|? k2k&NǧMnm-e?.92kJn /dwA~ ^c|3D0ً ~?"]h#"tU*.UT\}9@n? 7< D^ֹLeȊ ŜmW"8ap%rW.cq%*mT\mW>`M4 |ƮI-80ԾLm4kT棭wM:6 A&_;T`z*#,˧0Y355'5Y r$?+jyzr9dTpEnY(6i \vK'vQDW\mW)@ +_s>^+QIbp9J!?VcėvW FYRe*p*)>uTq%W֌+Qq%*9)6+g{)v\ZkǕ <@@ r#Qp *ϊ J{ GKnڸzwq&ېMpI|0%-' &g@MvLʬM&1I[}R9|(/ Z4lHGvo {>u3DZyy"(6OzRPd35W.[JA-8n9Q^в丂9z=_В=p`ú=},d2^y,`{F Ɍs{"CcZEJZ ZǑpȍ4 ~9k[nFGK⦈ l6c}$`q"$< Dŋ(P*j<*@eL "<WFKׂ#02pUĕTHZ?_RԮwWeG^yJpeatK \iAcWZKs:k}dI{ڗUR|W/93 3~pE7 7~}Frz嶇?ʇ,ٜQ`ɒS4(IRJ?Y5-? S0THaqk,yDe돟[>~ݦ±w> qĿ .>Ƴ6|? 
fDD M%H~[\g<(O]VGfxyqah6E :0K,LƦsQLQ3ˌ%PK!{4?BFbDԏSbo;8g E+/d:DPpCGu)UҘr 81uaVZ(e?:.8Z(b~i<bwBh^vOo._u&ݰ{<{U˅~r[$ņYVxʵXܦubrzDq6T_n-m^6`2$$NAEmdƫ).V!4eLU Dkyt{XAFOX睴ili0D&WFk4J'&霹,тc\+52Fflk,KXR_Sq!e/Wip6reBc)E8?;[5u跏M~noi029_ x5 ݾ!zrW(+Y?ҍEg;aw;Zݬ47,q.؇op?fenp~vE"-T8S >%w{#Gթ|1?M K(.JhE<ƛF#bpǙM&ol=vyۃd9m%GeLVF$O%Xb2H .qgzkXaT`IɧU`K p0H\dL$9x'SA j۫g}-áfltqklv VS6/ӵߣߏ') O.W=L +dq-V)M _KĠc.꧱i*PiKfVz{ԝJe6WU~/liT\[v[~SCݕWeQJ0ԷAzh>:O~f+,g?Q|Yl++2TEE0|Z*Ӭٮ=ŀ)d|?&bP0:^CҘ]]7s-}zfs_:|sǝ=leJL_WUEZ۞W]߻LsBKvte`ͼ‹ kuUҲ74AClrfJ'K먖"p@kQ K,i uDacF .Dey.0( h3Fujl0giDyG,cJe;@|B+;6caۺE€KJ89NTQBp&L9B.>%ĜSb ԷX "9"LT.h@r"ĘA',҈3a)D6+JnbT8pXyelfitKA+s`I"YHȬ&{IxY!7>8Ӏ%vo>S҃H/]›T2k)3#C<ҧ]gc-g^.{+y U+ًj{$t^<(h`F'B4P@n` T`il3-<<0yduAˣ.|IƖauT0Gn"p\3WVgUcs3q"B;Ӄ"7z4~_,El^6QƯ߼*ϣM[HBrv-CK&wSV:qK c .*C̞E2md]K:ԘWg fg f^g*]ԥmpQ}! {-F"kW8aIȝz$OUȆ V~!L쑓q6zIG[x`m;s]ڞ y*^'ম[C$hMN>1A+{g) sH@Fz5i$9qunߙǩ{omv.0 PG>It/ފ'0mlhPwM@ p=Ӯ˃y92-QV̹ "`=ʌ Ӂ8p=CŅJ ) 2%ݝNRt ݦ 9 ~,Nӽr3:_d&ӌ$ :_6?HΖ-Fu7:f6:-Xv1ley"/ȋuPa6蠖8 5>-MG$jzrI૬I9! A?kb5UQy#)! 
RftKd.AdB1КdaVPVq2/E̐yeu.a &H$&HE|Er2Vg7NbmAjq("ʈ;DqkD&^Էy ˗t^X4wkm<!g:F_7rƴ򍓂͠ h&ZĮq;7凨KÉI99Pu"1uxx])N &GIetNbJcJ⠗6E[hRN z?/aG$Q ;mOAs(r?cBPVKFXhVcP+12D25A i.%*(.2)sCkrķQxr3eM mvېC6f[$\F+ :r6 F5A&ՉxgWk m9'NΜQjEn*| d[ 9`7Z/$Kq(!"#$<.1TDR6( 0S㐛JoX::a 9urD0ru$J $r< 6`A|~E t;Gh&>Cb82{xT4$7y-^yI¡7p<ݎ`Mӻ5WoGwK<+뭨dt%5Q.jHYPrV[s,|#6MY=w:;,^(%jIm6c:ȋay<ϘȺaߵW^~RbٝwuT߼DӠ?|f< go__T$T{h1NXe]K^3wGg\Hxl{_چK>(ОjͩO42J=y#cBĕkղu6-+Ӵ !Eq!q(Ø7A`Zdrau­lYAA$;\K͓z cxuqj)%)hG[Lݒul5ٻ){_7Y, k9,eO 8S].E\/w?YqjERjMtW/J%m'lg aL K5?'h'rFB8=Ŵy _%;/O'مgl 1V ~h:.ueq o&3daD'7$7u6wcnVe6`X'<ي@W>{ǽUg^Gnuӷs}KtGX":I0Yeti uR]RSIG #W%'R1J 5\27^:@:>ҷ^l<7W4 qrcDd\Y "eNCKlp>Zepf\ȞnE7$J&;;}Dᅏαk`Uex/QjUa;IndU@5uNhUɌpėyVjk“.N~8ttН[D87BE}"֣3\zh ZjI`MA4Qpxu.p $GmmT%@HZcLH{kk5F!:I?/+̓D׿.)+.(7n= ` {aVqvg'tnV%8!DxtO !hĀXZ-EaNe  qԧn.n/DU|ɠQ3v+}afGw9LJl?7v1"|4X =h୍c!TGąkCέP׭,"/#(g_c=9 ;*>cl?P/t8ȵg/.hl>4~lXɴ>~du >1Ƽ&Xo8#b]jP+PӉ@CmeJ[HuOJT.#oE ףӪ0ż$Y&M| N:Z,GL!+KwtM7AwۍF쎎݈ݨ^ x0\FłByNGMe3$DɬqD&il>fݘ`G7A0-d85>Km*G $\2MpG}x&k5h˻WWV1(AYIiH~qʅ\~ȢDˌMNy=%LFUQhH$ȈE6j͙DLIPm_mW&Kfn9nizxAH$kRF#(`٨zZ44(K(D)(t''T9F;P@p뜥jF4O.3^8.\R{߽Գ) X!)M5 V z J9A[tN%RߘdZXC 2-h8d0tQ_zza !Ԅ^*Z5j& ͢R^WQ(5[X ^ib̺`G7@[&rqR뜆<ܤ?~\`oUżVǙDT׿|Y-2z}\9rgfZO>.~ŋsKg^ miiR&Gߍa1NW{188"::aSoxo[wݓσ7-z8ɸ[,@d|j54eNXNOO?̛|~?c.m;VKhy?Cwn%2w:ƝI>?$i=wprriN=^R7Xt1rٺ~aIt~W#:>[;hg+K O^nt`w)RK []gAq-؞W'-׮f5Ym' cϜvKX/7'uIWe&n =zU> 9΋Y>2s~.hx\dMsep؝?@&>9L],i{hn#]޵cٿ"a~fwҽ`v/3i4%G'F*I%۲-dUERE璗$MB"4GT|:!(=:߅\ V8{ utW5\NKp]+CCBM%(٥>FҐP(FQ%t})JR XK:<; k ' "$mHcO[3Wϩ\T pۛ2&QseFiAqA5t6Mvu0QIilmz"ڗ>LANݖ"O *E+乞p%]ۨ$@eCrAf ^Pymg24e105ͳHCsă,z+ D4®[xkqpyt퇺X׎QSk 0G-ꌌ( $fwUhr`?Z0KQ~yH,ְ_,#,usq/ٛR\O%@a΂{2&ۜW_|-o8[[ܰ\+3_;^y1_F,*jqw߆Ѵ >1Di.O('=x sLƷ˓^쏝^ik]%\IBW M+@1ꛡ+ªQ,j+T#vpj+ v(j]- wtkc%&-U+ZCW BMR㎮NR=tJh=0?]%ututE1KWUBD*3xt0gR0g5trJh53JA::E+O+ոQ֜G۔Q7~]%p#(J8$x>D߽MƳ"_Xg#'H@V_Qû=4V W4ՇݟJڍٝ$M.ga(Ԍ]:xM&9sxQ՝ 8(t< 0ťxg▱٥$.) 
y4߯-лƘclr3[b_)P|n@7AKbaKecihSVx3S,`3,:2}jY ZoJ^r쬅Z $c,I9LMSTiz7V5R1խPDP2ITl>]`"G]%LZxgP2$::Abc&uTl ]%5tjtՅ]]qLnB8?]>cX"B|㚘-MDa60` nۘd!ѩyb$nQYgzKȠ1 qaw2z e,scRU\]m3~<)Ფl|;JRu&0e0꽛76ӏ1!8F`)]n僖olsY\Nbvr]V*zeYԞfP/Xc!Ddz6GL%(c]7ּ5rL"pZ*.pJ')pd2/WVU*g٧o|E…|@bA~#Mϯ>O$bp&ce1-95Д7y҆k/& 0ې/>o֍s 䞉ny2{,|~wӢ C;{?p[#b _WC@v3UaYi)mSQ6:^f&lܿ($ X63 :"[K<(0g-AEen!bn[v2hM (:!I%iㅑ@l2CP quZă#<`*=2d^{^#S 7OnÁE g`iA+w”Rvz|hBeuϞ0Wp~]tკ]-3&$%P j鲔k/eq;Im4!tk/eO L6+]L fX rGI0hQL Z rNeŐ=U))HZRO} \K kYm68Ϸ k)i)82R *Mrs.2П8h9 zȡZ3-&%!^wmAml6LVxPWe9d9^I"U܃'~S}si䮪;?8Os}B @'zޏ̰@덝O&f8L_?B/&`vw>eJ,>@@FW4 "4h8b4M͆jn:O}HzP0JVN"ޅY P¼Y/|l3Dx?He%H>c$X0-\Tʂ:/_ȸбD],0HၱiT ʤ!AmP4̾k'>L.zhOhpc>wxX s]X%{7&a[ ^#1JBLD`A"w`-R  6R+ZGP0vY4a#A2ߜ !5H>1T|z\$=A¼^+bC4܆WoSck$[3_L"4*WZ[lvYbr8 2Hl\z^(č%]lة##::ZGf>QQL!(YJL&0"}':qt\qD&\3vH *0 5ZfŚSkalQxKp3" P!k/%%\2N@XY#4)JjGy^'Nҷ F 2Nj_6AKr.b{a4S7^y~YcF#+`+"D'i2 "D5Zo޾,M4\ݧ/synEm ^b]0zNG 1<~Z2R%Q$O]\GW%訴&d;1^V5E'a,;; сNX df"*cuy%RU@֧*"L+<hfꁡrH# be}0z)# 0+HHc"5 0Jvbn&w 4yip"(bsD;G:QM[PdJ;|xh y&E9IՎ s GTvL[,UW߼eZ6lA1Ip,j&u2tOE]ug]ԗ-z$ߕ@R#T_r6sS}|㻪qjQYJ-D|+9{z)9T{x!- ڡ67ڻ7z6^%'lMowpcr'WlhvS[{L,c05%aYX5uƒkÎgs-KSSٵhmɅLVvم)Ɍ/fn4(7_<DZ3uf`<i-tPc 92 F̹LOX"aAcs$ ;Sj{#ʼ" tUMpq&+_`K#ɲF)kdlK=Mt7ɱDХ$j,jyWMTN̄VRɈI[Z&JjX rr9\M-}{!{";n@NtsW;j,)7/t mU QkR^ֵBd Ŕ[NS:*$qm< ®Fh#xhH1Zb]蠗8*= -<:e s,zz¿7e%!p^>tz|IVZ>7AI[osV4!QfJJ#K1JlXTt*.ʗUpc&UUkc$1D#Vȥ˰ x߭́kYkgsꍌ٢r7 iƮXh;c!LXXx%QycK.%I\~X"y!u}`hRBA |)oζ >h`ţA-{:]# mK{O{/q(~tQpq9\^:{iɮ;bpq{[4KGZ]b1_5>$ ,I|65 <@IiG}nڱ+xvG|Xx#>e}.7N{" n~|Q&pϦѻk~.ޛ:{&%w]Xw}`k\&KYA4\UF14n)ɇ1GV̂0F9e.\ur5mIgyurv<<bC4X6u% NXg1B7 >Gj-hy)>GeOpm>Wau'.#'`y5LAi ~R'QEkMMPE041 -h b^f[auʤk &CkHp\ʱ9@pzZq NZUlިc~ƹݪ}exsyM_W]A[  郫o-66wlr0/ga^_yvig-! 
!˛3<E}=/:YB(@y(Tx<"/|G^,W/( G8[ѬZ\bJ PZ݉ŵ+q/z -㭇h͸-o]t;aA/Kfa7xSIo }vdX>|vŏ*v;ܞGs$8Q'>&G>4oey"dwrmauvzT;$lHġ6EC %F'c!9Xswt{M,gb.n_ovtZϤn>_nԮKaBcԺ(Jt5SȫH@Ŗ=%:Bu}nbG hUf!Ϯ ʼ{D##Z 4{ȋJks{1b6Ghlz_'"7MO^/{}B` <_}6+9>s/ ?\*.uP7P/uc9VSDRY~FқMtgS I v 7) ҃F߳zˡ,ѻeϠEjP8!Zr!}vZotwzb#v'YhGTe`.ySlAkT1$c"cr&Ŭ|5*a }5佅X UtD\Bk|&yu$IŜJeb2T}Z"]h] m7;i>q!/ιv/u:LA&Z&"% %լudahS:fg1Q)Uk-u1'IkR֙ɕsk,&V:c\ŨjQ-ua g}v|vr2`=[z ncfb =Q ̃gIBZQKEI%ӠȖCCDC 5U8yޣ ( gq k-I"dW+d SVMf]2ZnN[- SbS1Q'mP'Íuv:X)j"c=$jhƨ1RJ6>Nn[uɑo $wAr-9Gb rVC^8c&񄂳ڕy堘<-<8)}[8q4ǻyM_9u>.\v)l~e_ǻzbWi5kœT^)& ԺLEy-Qv`*,RslP}.P)){TZM_lHQID0I' ]6bw(big Υw8 ';,Ѷx&_+W.j#v_Yfp9=7x~VlNƁVBBHhrQU10h]@(㫅ZujH[𜜭i3!Ϸ<.2!*Z(5?{׶Hr\'ޞ̺Mv؊Xѡ+ R_S@/M\\$fctTU̺d"'A|J%h]a!xY3 4w=Ig)#3k$$A J5VZiuf\ ^%eGM,(\Y:1МurƎz G MO!Mӷri\(,E(N8i!a,.+-QqfpS TbDN4=JuqcTE>WeRR/?Ok4^pR'\#Ҥ+Ns 7 a Oy >*UU`6cBpn{,C3"0ːI)ftܜu9o=֯jz.-Z;IKbK%`D4f{ ?- h0Xs7.nS_GzbjP,Op_|s ʻ3ڬ4ЃN%zw^AmGO~]cُ]~ۣF1=~ja>] ^\0͈jɝFݱ7:)d2a75=Ӣ,OjopG,Xc&v|@mr%9N3saxHV(. BNsT,MUq i2"آ㘢cpE"] 3Rn5{yl8Mh-D`meDCr:yV%l(svyҿ>W4=vћ,V5G2%$K `9 -I UfCi%B$oCp>H)!K:<9!J8:]v.fd ζ 2nvsKY+K@Amns+>9(,{.Nj5U \l7y0+~g åX(J<4c"('] U* udU> £ Ȏ J  ggso41`u@ExT oem+jExJ"kj;6tR5 ˏE/uNGifNGXVеkwm]wUiT[qƻڽX~8M:=ڻ+堭1NL%(3Uj{!&{6ٳXv67ȴO-ٮw6Vl`ᏑEV .^<A"ۙTDz.:KE>ܱ""VĴU[&s`&QY&ΈT2+%Dt$YY=ڠ-%btUN/%Gs 1z\"goj ب^ Άk n~0/n  !|f}ɯr޻פXИX&9frܛ5揞^^ϟ.tm Hu9B2dTSm>S8Ӱld7%Y۷}{=1R-rp5>i[z@!m7PΧ+Y6^9,\i3nEs孛o>iOCo_> J "MlVX?U׊CY5nhUc$鎫f՘7zD0oNWfvpɿ-]m鷡P@W]vz%y@tN͡"Sj(8Tw+eiJRGtEk t[y0t:q(t$ԾUC)HW@i IN0ઃܾUCG+-;{uZ(wu/'ZY .Zx6W:UXBEɿ 9釓֌lNN҉P|r Oqwo2Dw^ɼb.'p۵Ħ/'bYx' ]^]뫳t~_~J:-25Coߘ>ֆ 3'.'uvQR+gWIBB~'-P#ټ`})E;;]sժ)e-hM&gؿh8Jhwsw-d)ƮY[]rN]_2V=ˑ$3 LbtKC5 ʕpfOWpf\ZgRG++7w{u0tJ ]5۠J}AW|KR뵻ӷd%0E9#kdOR@sC 0fH0!p]*( H뒣 SX*,eUx464i6cea[Mf/xtXkZ=詃A δ`{AvT1pD9p,Y˃jMn]9 BH2UDUyX(e DžehCLWΖUϘcr\L/Sq X@pj maƕYksQzС /Z%H@ri+w%Ã`KF(TJC`\wvLVSՙJc% N@;0$ կdy6u.~L +s`ꋐAV4~K^.SWb,pBnPfIR8 On'j0Qfm q+)f a R3&Bh°#81HWmrG{ۋ·Ɖ1bEjIJB:}l3A&g[fjlIe,xT29AVSM LPm`-h^b@uK< ,k/}\smV28-\L.-Uzpc2Bg` lm+8*Xra #<(9d¦eU ZLt~,[Ѹ<{ *eu29xZ&h >z,:;' wW >ONYiQXYĶ )1ⲒB" ;> 
VvؽTvQGz`'SA[t5`QEh_b_p f,@zKiCz ,܋Q aM B,>P5RmgD \ 1AgBoi/Q$`Ȧ&Y ږ]G=h}y/!^͊z {L-͖2Ғ&@%{Q-B7ZK'A3nGb0SB]JKQW,jWmkY+zZ(U[HK!-=b4v*!vRN%Jڰ; 3b[d{8)#D"6MGՈK=Ivn=i(WEw5F*D=2HeԠcQz H-M&ŦM+RqhE o+Pp* NH-l /XY*z.$bJa"[s|uiPj:!B&TA'ԤQB0.x pixXoCnmLeUkjMw4DX jDpT`.oGR,A0J5OK4]gF AzgRpD N}GCGT/b.Vyo+%?.0^EDVSxeejwE/_rgOv~h|^F"C_F;QtsiaM?ntQl7an71Up90 R|@+q C_TjNCt!  N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@r@qlVN k28 h8'ԺH*b'!:R@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; tN o99"qAT8V;@B; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@QHNN d\kqAwAe:@'P@̤4; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N v@pj ɆW?N/4Ք^'o&nZϻˋM AmF%VRdc\\KVHl\:ҍG$bW,MW$\pSogx\JW_ = i{ Q=jϺGp5LmV6\YC^JGA|pErW6WP)c\ h^JV`mC6">\(ǎ+40WZimF"1f+U"+RqE* '+!#\`O2Hr]W6T:@\Yi {E~8<h~\ɣE\ mdMoI7d}f*{bГ? 7y7O&g 9Gl>c ) )&"$qV#ï[YkywkUH}N4g_/ d=+Id@šOW.WysuRn+E1mml,^}q'MZ┫_5<~˶"_haѩ^xtjƐſhi SXTEhQo7z22 8MAre6OHc2H{8e8 rl"R?Wkk볚D3L+'s˙ԪTȸ:@\ '"+RkqE*-?:D\E#u)`cDrW֏~$ /WU1Rj`] k/U{0nd\9C:#\nlpErW֎W+JY!F0a`TU2\ZƎ+Ri"q6#x28Lp rk0:0WzbF`)I;1\ZƎ+RiW++K4:3()V/ʀV/?-PmfH%(ϫ_@L6{6s0z o۞U D)TiHۮ?\ɟumv:>ZW)+bB6udYSlR3l.9ߤRCL͜2ɵ<#AWPς:D\y; Hg(\}4LeqW$]`rF1t7qu8vMW$8+5XbHsRɓ6 \{V?> 9Ko\ 0fOҍ,pWzd>#\Ap6\}/2LcU22W*bA4* H+Ri qC#x28LwF H7v\ҌBMF"!{Hnp j}tE*%GW++ݭODj܉pGZ9 V ߀VN|4t&{|dJFI hV(ʀ LrI6*5(F+GeE W_1l">+Rq}W#tWPo9JW(*$!$7f+5ʌWhpϪGBg@`G=?!)ҍ WaU/#Xiϱ̮pErWSǂT*W+}5 v6\}߻V ;v\ʱL,22K*\') r9P"$v"2WV:xgw+;ĆECz׭L9a{ In&g;m*c1Ut1Nm5 {;|7pOkP%cѿO'mNAᒀw6kp=1v>J'Bt%`(OVio7*m8 .;DGkzӍ~wnt]MWލuGq|z~N=' &$7djwùy|W@'JWRCUƅ&B߻z E."ڌWlCUAڜ WlV~=ۖG*޳0o0Hp0rϸV)Ҍ*2ZuF`d6l. 
ǎ+2W9u" Vf+kH3cquVjH &7f]AY W+cM鱸Q2Hr 0WZ!8H\E=};?WtWP;He0/WZܯ8=j %@q6+K{Lgpt^ոn.5[ջB5oqz4ѬVݢ vݓ.l9/N)n%Ŷun \~իo^uaPPҿOW.f~8)ʦ6,Sʨj/>ՙ y30x;2Gw Q/n:x[>NXgפNxۛWpa} ]7cw (;[V 닋[7IK,YE}opSǫknV>|1ns5CL٬X!k\SJ)Ml2Vu(Ku!(cekB+)9{ mOE+$"ƳrywOwHhޔ:Zs ҡgmMjԔN>j\+Ut4)pVbzWg9.^N;ծNI&]% I+gK>RI_IkTkՖF"q'ޙ?ۨk~A  ;z1%r[q]x M7ru1 GR#)_>_;>9tXMER_4{v_Ms_ݚ25;|=lQ7l&5r?{WFd S6LEt̡;чl $leIi{7Q$%-Jj JrMUY{@~mNN2WggcŰAtuR$љv+BW!|)^WCx#4>`M[Ws[:Ye}w>v0r v-d@9%yM7^-.'T"vZN~֙* GVn`'zvf03a~/יQ6KWM,/>bcGٔߑݝg7(v{,a~+p.MKxP"'Wq7t3祱/#>P9[7".QP֤J\5)z|'y|l'Ir,l ۯC9c}O\)9>R00euM2%l{ɬ*E5&\;,+Oٸ.-[UرuRq2F08$ĺTvnܓq̖2KRv!*P<m A'ۜu$DQy`K)?rrrB6ojltgVɅHYǚnL@BORV´UjJBƹi@Ӓ`ch՗ecoTP"\EQL/Bf!koLI*6Glc _Ln[c6$t>oY i9Ъē^ˁAٺη\$Xj<8nU;[[L1*7Ԗ25R[.>Ǜsg5lA .=ThBR丷tPp ]WXSJ&FT"*d8XOWj cH@7quHOVO7CGpx?~SL FLOQ~NvIN3:z&t3GCGG/}Fp d ^+ѕy7^WAh %S*z@vsO<m}596K|?Oi7tIx'),zK,UeSe6:UPTLQzjgbJiFҰ'nKrn3y&~͘ft/WBl4|i;P`bh~%y2(th} Le'iǵZ3V;V;+&*IYyqOViS2*Nio2rntVk1G"iT˘udUWR+%%M(.@8X_)E{P\}V?\,dK|ZώRkyy M-k/>-mG w7r{)o.褌ӲO+KY,[^}C>pztwwk7ee^׳?V übK(_O+yP;q=PwTL)nJg< ;UKf*l&˲buC qfH.F<!}^__s 1_.p<hUbKXGH)%窵b ЈgU!b }I'66EFSLXIH^bCpٛ$FUJn g;8;[}I˗^}؛B7sc+ffLRu:(84siiX02b`eN +V*ff3{D٣ $j'fTXjUd(#u$1(F]qNܡq)L\Tm95 V}շ6hrK1 DAAA;_8wcO5Jc\d/?] 
?~Z_]3|& }Цq*kԓN|z/=sC{̬O;w_~[M`/}NNξ?}.{Z@ +l : VB%_)(޹hdfKx[-bB_mV^C#{2b4>d R( SbkSs`bפֿ^P̩xnqv`ox8b֬\0D rT QȚN֔pAnr  |PXe,bz@R]QYH5;찛8wÑo58M?GG=0[OU @,y=zF{`5"քZ)pcvbsX}Ц+R Ѳgغ%mʽ%vC߳ 5_\Zg/-9/~1~q5K'JZ]bEߡ25>~FPp64{0qiǡ;C<> ?s䝫/b%gڌaĚg~lJ7|:<K'(@~ vj3Q/ [XG0{A0Ӊ2GcZ};CNۑ1"Vr  ;M>k"H 5{X3V5z`2$J-gbK^}> O}'胣6؇gLKKfҦ҉QY$ jPlty@e0DefT7gƨL:HQekM1.kڕPDٲ22w# 2'R&k䮀N BVs.>j?đP),V(ĹGv)1(׃AMcAgՖj+9[K3Yo|p/oQ/otH,D,ꗝ.\AK!Jb5W2D]gfV7YlgoYI@IfLp󠘫/m 'r&Rj2 xTZu-8("xq E+zdQl6*lgĹv+0QQݦr)l>&#qQ@D񤮤(ѹ AU VA]|h6&QO?cSk9'L&tT6Ph?ۈ=2 A:ۙ'=/~v+r&(1lunPTt2e:qT9:!ڜqtq./ߍ3Mrf1d*5XeDɸ3ӡ*JW*fx^""}`ߗAz2!ȡ׹He=ê%Yl*{k"8Q !)%UR!UG䜙QlGc*!MC:̏y&֜Ng@%xɰMx(#h'ۀ9䙂M=ag&.rk#(az|_0i#keI+Q2H~eASkgmcM裟ohӟ6r`-JGF'nͥJ_3_kӶj|!SG ۞uG|Xhs1jہڞ]3;˵=X./R{Ek?k/޾Y6.z) O~[z-7yHvb5C,pT5bH8˽ $Qsܞ;Z̺k9gmWfy?up4hjXQKKֽ3 )KRʢl Ћ;;sf3=λ(hZ@Өň`-u+¥"y$,(dJ;f mRk rRT8;f -g{,6AǶ\й[3*=A*JHKW R*CER8 K5S_u wy }8$EHwD@4:&,fEwZI%PD0@AP(@2X! i(B-$t<>}h&ڈ8p+XJm.(=be&pQ=RIп*?Ѵ !1 bJZ %ʸHeA(BrR9%T*B ,R!KHqQ(MȨ J.i,L)B`xP1EIƂ. Vhk59#Bx"؝gcdB4*a() k'Lπ7C2Rb 9ll>ӵXtc,:;ƛ2G4Gow#@쏨*Ev*pTK2?t0-Ѥ6q35|$Ƭ@\ltgfɤN]槱m~0əG<`][q {tW9<-I(1yj \]3ӯ3ojG@${2*="F<G`RUS>({!! _4FD(8۴X14q$^beqWGtI[ekt"oS.;F3}veczuK i}05u恓J3RBuiZ6ܴ=G_wx]gx @ }SyCMKyU…o~w܁UFx0AxfnQ;WZo>y0/n^n>2H{LƎjZmGE!-A6=ëDUI j{3`ŝq_ʊRÊqJӬ\{r`S7:ʶ bcy#}Zxg!*"LQ`K:ViɔGzҋѼܳ E"0."Xc*,d0& Q]. 
<2H+́xUp%VXw1Q.{^a[ؑ}CQJѷI'ʊ xʅ_.dxufGI:Q+R3EA`w1yH~@]y]Vki}re{1J{0C C!fH ԂLC/g)Oc.*5b] ;\'-on/gaF#_xT>"At@~J\:ʲo"W\޽ywm}7C!%96ݜVNm{WsFӞa޷-gVJ>lIZL !̲9񧃮=ݝ *sйs5w2֬Twg>3G=ݙ勸;Sr N14c!xHH!F 6FGl`^DJ3pwߏLFoqTHB;$#[1%RoH~]܂"4FƐ:>zT!$UÍ^yނf3] p~A4[+M1RZmЀ;kdE2Rɱ4"0=c Օ6 _h%\!5"%(X!9(uQ\G$QH@P_/_2V堂U= #=įJ"̪|r:5>8O.3mkśp1.%m}1& '4 e\Coː]u$|Y` f4VIʬ0ɵJTiPT@YSڈ}Rn$T: _\?Ldq%|<]vV+`SMZ5H*fӫFY?`*j{:ylhG*;tbD]*խع^r^%Z%A+3&I\TO oVI[K#=VR٭GjުumWݴz^6PJ07>^ypnn%_q}nxdCʬ7,P~tt]K'g'~3afppVt>5P Ζ77\$-IU'$}Ӫb]Q)mN/~|QY&/_BoMna)RYMq D}h|p}A+~\h[à v]T D8 ~Eg~s v*Ћ*ƔTF$ {6u4n.n[k2E^,׭ϖ@8*ЪmygM5헕";?nVvvڧYퟁwfiQY1^]V I3uФP:\ 6G`ciNKV3ψ]gîX=vtUr=/]q,)_ W4}4WLqz^&+I!H3ď9=F4_^MdzL5ЛwxIg=u #B\1|&\e|6UEg)\?Ej3!텳,KD 4[D'ip1ER :+χUbe]!ϔNA_ fGIrFɕ8R.@H8## <7{WFsh~زlY+,Җ\#%M|.$Cd$[:Jgu}4~M+Q޴Ři)że#{v缬}Y:c.cRs;zo ˞^E&4 ?& x|N!t[ o,ؕIX!Q N4ͻ}\Nၶ:ɵĕBoy4Ppw7'5k-=ZTY zWIb8X}Uﺶ).c9'L3yUO]p;wRWa3 cW[.5fkk !րTIw!d伌6Q ['R|G= I[cS֎17m=9Vm L<<)6y=ܦ]ծ>@K)Kj[ 4fSE:c.C^GT.+j)8zzUMSI jO!hqf_؊blRI旽A[c?͠m;SFGSEM'?>,tG%6ľpyAtqmm Hq+p8P AGսÝb*`B+N "x b'lŗPT m*+mBZs7>=|ۈ7nۃB[d_\4OwYA;Nsq+#vcsܔG]U&#˯KvQ+aeIOnhG JRRH>t?Ɖ1H *y/=y1hpWPEjݩ37w=+QHXtg D|ĦfG/P&i<,'>ߡ2>㹥=i8y_ ^<@X#v7oơ{3Z#ߊH4TV52aF1WP܇\-%,|z'P4`M rq^ D-,C2Ku9,#)ɼv9{]5Z<^~:p>CO?]ΤCzNq҄?O tCsM^:Jaه'6n 떳-b]^gt[BHJ[r 9S/ 7iw\ {-GcjV=yKEGӢhO iGcyesfHӃj~xb}ŗ!?,V[z.)kpE36&NQM][90N}HEp}ڋQN9eC]7C*=4n^o<}˜4tizl;(CrEQ-w}%j7AlaR|8~1!٘{\:9Yl5 K/Q}6c{-G0#XsQ\o*- *& ދYZls1ZZc1\'Zk=;ƈ|SmYc6Kv5{4]\ meQ22e5W)e(>%ڽwM(:W8}쒥p8֒ wi^z{+ Ue(zq<]wϽ>88Wle& toڗQ]b]@ѢT)QѠӰǍ=nqёX&+;CNTcg(A*]{(!sQȻ84h0R1N}8WGkιv!ȅ>d9cO˃ N\Om\,p燞 ~4! 
+ALfk3}76Ez+}mڔ67@rkuȥ|[^;G9]-Ţ@Aigq / ǽزˉ7}_$ꊃW4l&Ud1 ][FjfRtY XƒkV%djcF˒=>"b7U7}2J98J"4$O3g#'SJ vOwWx;t%;t7K\L!uTzJwvə &m},:bpdH׺εŞxG?~VzQ,HB 9O29:[@v9jp~j,|\~*bL{O.׽?jgp7m-:3vw-ϗ}+۽Q7|_|ɛy8\ܺ2?VZ#fkL(^9!*jM R]"]#1%#NC3bӖt^ncViNd>'8y>%JX.Ze\"4n"مD>(1J3R3ԒZ\)XA!CTfZ,/^,7Y^rh^̓"myqˋOvĎH)}ﱠEzfwPERuӘ %7 ⛥Px|8;͇<9a)*lp_n8ˍWe}mگSMr^LiR府g,i-S^2{w cV+S!x-UgYձBu&/cF(i14s2G;E_kM( dnӅ9.UN]Q"jߟ4;('7']E˽V_ȽHK, Ȭ$*ͻtA] v FhB>ā\ *+[aم7J 4tjcI -u]0Ǻ0.37PD7F}6Fh= [~ FUCQPE*6lf R!%)9هP:GB׈}JLդea!{OQ$ _]2͜=K,A ֗TK~4E;v^wۧ-f8Khg|j|ƢxB{UJT#X_^cq S퍴|@*d߷ȝM_oqv\s8si4b+&DI!&p0O"1 =:7Ί%H9rG19  BmQϭQ5W8LigWq;+G)G_" ңX:^ȏ-F-ـc%1(N{ "C"][|d !M#O!=Ac='sPvto0sjmZӠV<]Ɠ^,*YhTuA JdFKA(L!ۣN1%qlO;<gnBʘ%":xYBOΜ13OI*ɹ\k^y[ƸGѫ"dR=T^}K20.JKD1 N ;0* irht~Mocatw]\]6Bz+ 1On bW*6IO/6m~2I(5AAx BӅ$ mG^/o(;p`o#bkP;uh(M֞h1q'>tN;&su]bH4X2b!Ė1[Dٍٳ.b7Ɏ7[t[2FnA~ -jr€CKq#gĜ-|X鷋ò}[5|C +R±+=1Vd$"ٻ6,UgiC188 CS"6%G;ᆵ )-jE|]zU]իw^0AP8@2\A+? 6iE&a1_Dsn%Kۈ]P{ <M>DJo':a$P=8 Va2.RcYTN 8UJxH 3ROm"GF%xTrI#gIdJQ b1\4i,Zo#or :-b|f`>yΦ$en~+%`~1~ NJ mk3-LŴfK8'fdFyTݘO:'Z~/]U" b.q OϪuz+Ƒl<x-lŽ5Cmڑ_?E0}[Y(a}OJ+V1Yb9b8orZJ6lm)Qf EB 9lNf.u}Yl$2g z픋{:'^u6W37vwo}.?ɛwO0Q'oy{8q+06~0 ?? bh*ТY ͸){}KA\^fnqܖ Dߌ%LWzWl`\EI_oS]P.|)BB%\jNw$~Gr2-=V;ל"1 )!2H+*Q% U!1KHymC{xC#FFĝe #p9Z((1 kLħsXa]gهܚ)ZՇgq6vL ߇re7 \9mF۫q&Ѿ}_{QFf8M-IyRWI@9̧7)7C«ׯ{S|Cp1]?`J`@oFmx6è|y s2(PF#ʁ,<.n*_T6^Wԫ?׊ |P$A:8E,TuZ7xjg˓R6n9CQUO~*i54/yU;ٸ1 }V_V YANEQ1! 
u:\ ST^}걐jL~-H9T{=8ׁZUқ)+xV.;Uv6}+#>5&J%X S L%Hp.zls 1tWjjkC341ږuѴ?Npz>L軌l:6}.+)UV kŽ,pb j#~8rwa%£Hڞ^0|)1/I¤.B{ 1`(A;`amA7 =!"=AH k/%% @,xGI4 uDgl cm/KΠVhzWhl;n}ow"x3U|WyJ }(^#PB2InOLj9+j$wv^LKm c^}"ZLZ/۪uWHHջAOnYŮ)Q Hp๑NMs^B~C&S} W }@xF+yVC/`}SoBBj%RNbV\D-iNTRq` p/~cԮFI_%<, ~ܤgPf ɪ|tb!#CiOa}fݜ%Hc5&Cxx„a$9<!ה!CQ+"#jAOo #b'?}s'?}s'?y}s'?}s'?Ͻ<[ga3*3r* gTJb\JZ"ݩw*}N%\z>ϯjzjzx3%RX =VC\>hࡱq=H=@+ ,XpI=] AqzAqz/FV'gOP^N rq&o&9R,PkY53P|{#؁a\GڋMn޵,Ǖ_qL<2^3q8$)bQjw$jR",Rwbe1V9@>q bLOw@]{ɪxy 4NΠ0MK`.o_ Ai:_~%֜Ru{V:RB%dg8YwnNdўN:RKUZ1t*G=KR@C*H>\Rw96L 3B!9ZGOt$pI]2 YJwzg;[CΎfٞ 8֯ս#XJYH_$jRݓRM*bۿ/%U}JFdiuoྼ\T;g {K[nͫS?ӹ*v+T|4ARB zSxh7.`4IM~|?rw>pڿzION{Ю$L0E;?h;աчM&ty'U]j_z*N%)\[l3.-.C𡒯oP^cbv4nv '&'Hb K\ߟ Kl! 4*cK0 /c+۽vڸkӵ须7ϥ%{Ʌl+=^X=B+1Oh^JA" Ďwأ*̚)))G16FLf/*( @N|q~:ZnM>(E^f{vI.+;zvqp{cGww>|=L Gǀ d\,;qԕeA0f.hb'^f ʱi"\Bq $d-M|M|M|M)NKA$a%A]r, 0JiPRJcv&RFN80cȥg|f\[BDT;);u4sqߢ! :MmH~ecyym~xug./t(u"*Rjި'vmH.F%-Wu1`jXNg백!_Em#qpaDPbc%dmLξv6w^:e-/h)={7[9>շ;[.+z!Je߂0WAo*}U/ yu3Lҳ_z\|:Pz=zhCJvakUa5GHZlTQϾsWymCې(-Oώ^Ǐ}aO>M䮺/.yR38m6؃-.f IYs^ωbtJ Zk5)Ѡ0w}e iJŗPX b"SbMX,zuevkY<7cmVzګuC=3XUY<=\avmB:kiKȠH_S\WxeP&+;ґ\cg((G@c.J.5ecJʆ!r߈~끂2R~V wJ1_b戀Ds%)K=9Z.S7ʿlmrKE4g/ǜx-IQY'<OANŮKRedk. 
Sc`*u367Hsk*h-"Ch VܽYKq6QRHY4sxf dkAS^<:9:]FWMt)L\zJ;FͶ2q} .38N2]εN8l Ȕ95˜?6ZjYNE=fܚu_Fݳɝ=w_ޣhO/2$ռ'g^dmijpeEַzwzڵR0?͘uJsfeq;o"P=\cMȻȘ("bE, [5ϲ6&xCRq9Og1Li+X+ o,`[\xqbxͯzsͿ`)ǟ~9>ڜq FQ`؉1]n/ MdVFj,SL)y"6X C61Fo#fi;Glʗ^G3L:f8b˧%1yǮ'v\Q{ey|qfT}s1#8sP#GhR.5_j襊 24=loO[xEc>>jff&#QDQ]5Qrv8͜B_x(8?vE4ӊ+"> CB|7J2%Nѧ^Y )KK96nb{5S3R)Ԓh :+2CM3 }(?4pq\Au]q&b^qqKF|bwPatÞH84dڨPr-<`e^qq/x(x8;vCn$^_7n8͕6i5'ُ/H3<[oJ^Oeg˒]h ib+gs/80?rp;ٷ Aɖ骳rԖ؝rz^X=HiE9M.}w|%/DMr @6ZG L}s# 5~aw|$ڰVvb-.:dDEZbA.KB}KY *ͻNI:`mԵ]"8%7顈i>?W|9?W<-X*^FtjcI*u]ź0]3~¡)*̇vPT5T|VC[k+LI %=bJ\lz-|bci$dhWggL3 % ֗޶Cη~uԴ޼k]=bےx=x^FcQy'eJ1Vs6Edɱ8#)Y|@*kNX"xw53'J.bB1k'JH"`΍z 7Gn¨8zj9d#[ԩLf7tqrM3qvW0pOrT{9b)!PzSJ"F-d1ƒȘ`Hg ##C"->V /#oH5QsRU9J`[ N[5q{RiTU',;㲨F Rٖh`EB1XPFg18lO 8~2-jKr 0rV`,9YSSf>w'TsTk^Y}]>\d$& ܨTP!{ la"ݤ..D1b' ;Rs}?{6 ,;wfw;f &0M[gYHr2bU,ۢ-ڴMF[j~UU]B<-Uǡ9@dXKE,ZYR_>zcauxYtw$*zݡlf+;y-WừsX:*jaSQBHʗVZUշIi,/EIPAabE H%qxw0J`6&gIϫL%/?W.r"F]SK) .ɽvDX~D +XΫSq.Ν&c.-P `>ӃSŖ=USr&!9qyjKM6B!#.A&xNħ(i(.[67pA%SNx[T(ݦb|2>>)۫~IHw0;7I5U1͉6'AE\g媣Wu˻gUm]몑uֵdŕ|`_fsXd'HtDYyS?oT;^`NZl<-N޿w_;zeݷ>|V?B,EaTAGCGU_{՚U5ԷKժVWK&{K|WD\c^n So+/W - _`QB$ gyxDea]g>b#AqΓy\3{[i]$Him=<τՁzג@)1J 5\Esˌvd)e^d\Fz͆u08nE]י4Q􈠑qe#H(sdL0X. V9y$sÙN+g:okN|%^9GF/V ^/e+ݺ4Pbȋ1:xa[p1y!d"m੐6 [n;|?g?CliX˝';dIIsInnxEe/W*9?iAu5SIRryZ/>k}ϗj *G#.x'*6wg3 ?,4[<FxFܽȄa_A`k<5QBoM;\/jQ#m( RԶe7;Ȗ6H*̟Vʾ^'@(P Q4 ƚ@w Qȳb4C!Kһ¾&c}W LOdMggɩ)$OBxvP| weǭnFr`#-EZ&XXfQ)"`> mB09Vl-ay_fx1΋BHP AZP =߂N@H:-h>)j`?vbTd ^-NNcP.Dq*dZ ⴸdP׮UQż>IM=^ճLI/\u݋HT*nbcm"@,?bX!1,xWǷ 7^^37n.8_8`UYdrhh{4V̅܃]?춴iwZ:"kźšctSʹ@\Ԃ;`ǘQ' QDLZ!=>?N;gx`h9!x-ꅝGE&*Qѷ-Z0vS_|`^e{hVkRAC-Cd BV-t(jJst szCWאt%]!].ɳ++LI.1h~<]!J9hWHWPGt̜ 9!CƠ- tqوyqjXst xὫfhjRnk@Wl=vd`BJ]+@Ɉj %]`CDo RBu]!]q ]!`zCW9ص%BLu%7]!] 
UQ7tp ]ZK:]!J:hWHW"EWQhIB̡Ҍ7WQ[2n3հq1R %"]'ǴKlDu=ȦYSj=Fl;M"S 7He6b:-ϲ4gb~Ü '5Țal-FiELrG2F#J!܊@uJstIL(UBg2{ֶ7[ JDٵakYWۢ+ku_ jjNWӁO*]!`ӟkM_ ZjNWr#Lf+Q}r:¥/th(+0mq  VwD@W}+Du``WJ7_8ڧ>4 tСzDWT Mo 2)BWKu @WCWLky 3&zCWK%hRttŵpdhؼU3/fTG Q2;ҕ0\tͫNWu^У]hg+I@ 2zH؈X *`E6h0{|R4U4(v@B LJ*{DWZ#R۾URw}+C8Gt%+ Zt(%j>/}'o3ݧ+D)@W_ ]^\6K;p4 Z})f(!ЕCO)epmX}fhu Qv͛g+FGt ]!\-BWVQuB ttũ`3f}+DkY P2hWHWj'c3BzCWf *uBtt%5\w=dXEJHrb$fumn&6?Z%U/4hM`Di@HLx>F]LZԬ1v/1GOpџK"hav{~İ,,o|>|cv{&=GL;+]fyh(Q H2Iگ~4lbpz TP,JKT}w;D > r鼹檪L/m>LG+f ރF<}/R'D209P;rwcORsXhhM?|V˘QM^B`-ņ$k|`5 jk}ɨTp,I[I/-@d/z"On}Z&Zȑ*Ɛwr.zyR%cPG"X6h#KE ՞;:K+4з3YewhBJn lˡ<#BE>th A@kaFh^ver2. "(g. TjU|m%~YX8![Ficm 㒯e ,N 2VCv\v4giF] CAp hJYn(XC *VT@5Jhݻޠ8pT.mK` >x X!QJ ] W c" +lBGkl}O2}Yb$8(.-wC+P*ӛ\K&n3"Xr2*zSJNkzj {- CdܡaS`b 1}u@ -%H`gjM>cbt,,xuavLL'rPk N u& 2XGy(v !\4+Atxon¦b8E2GSF[ WRבe@ U?ſZANd|A+EP8. RSaKvr#QP8Pl,@u'@H<}{0h6C@{B>3 2 Q,KLg'4'7cEUN=:8!a%`ߏQ;v0¶yZs_̨9|oUpUz nԝ f֚!bb{PT8xiL`IPm6JIWo#fBmXg O`yG a/+XX肺az+R|&Q*L&zZct^[`X,.-9/=%f0A 9`͐s9hw3xhՅ*YȩՏoM?ZźʶMZWFNC/ >}r߿8?=7ͺ蓵,!Jq:Xob=ѢeTEKcָWoFeBM ^RaGqcfb<$$<S6/EyXK{R nk 92vq53cCYD +) X-&٫f 2znh)σtFXI" td+%n9vX^Ibe&S˧e˓~˹~:,3m B]x@'F՞F`cM-6YQKǰjQ3y&4Fex31=@9>ِʌ4M<E yؓNZNU B{sn].XSA,S!eL ۚ ,e(!=G \~42W+4OXW,'nrJm'w c(h{=CE7L˼BaE#YࣦIGXA,kP~xkSژbkcc2=t mwbJm^xv=ɴQk$5(LÁ8#Zxv%ZY}~hd)\2 '^Ũ"8 NrnC)cf-B_C=Ap&- lnT*wKcP˱XL:y#) )`ZX-. 
v^ty^]q;n0BI>zcG 7[Fz+nodm珷46lØ|%]l9}w+5o_avK/;/ۼso>>bٵ`94fvͧsGOx~rs}HG0؞~vurv0!?l37s~yOdÑ:N/gyn+<s-IWw?m8ީxt|]&ͽ0|zcf@~T[0UOHUx(nZSȣs\qyZD'.i9&b߷D@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I?n(@hέ' _OK?'.NwJZ{H>1 $GBxMiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&< $L'*Yz@ Ie:I{&< $zt&F@RI ydjAQ3:@_pV CI MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 @z$ZOp]Mh rpV@1 aI MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$Ivk?6|6Σmw7l7pܵvy~Sߜ]=!nM%a=%]Qp /{%L\ԓMR++?s`j7&Z&ʠO\{te I]MQVCWnZ":]Mw+'Ӛjq5t5&`.%gHW,) p i5t38?~{ #]5rGܹϛ= o^) =ΕU#k{}>o2u/ߜ#Mvɘ6gXopj'˫zk^_n?`nor0g>WoNoW8m?g#A9 3twgu$~O˗'_㫛L_!77ҺYEjo~ͼMmHlse9mml#h2'&S cYVc'Hx{utrzt;u:? -IN-R9ɦmSmC3E?{WǑJ`܇[ X/ a=lɧ.z4Fv,E#yFG=RHݭnvCTτE糳#Cb`&ԽοO^&B{ *uvulD[5YKxqllI"|?{||W7$kKT,A9CN[BfK4:k5j- p6TO؇߶hDZg" N/1 GW"3Ǔ{(z8mЕW"f>W_\\O2O׺c&}2O)[`/H;ǔCMU'UkԱU֙Up# ;<W&O־оOҏ]O+?վC`7g6ʨM_]ƫySK9 J\xu"O.gyѫnyȯ}ڛPϙM@GUg8Zn:oq~LFϳWßviսn/c^[e~\ x dm0DRLLrX.c+ˆڭV'Kp*lO(j]O71Wvn߽/&Q$y!fZү--[RG#+7zқՍ(yӼnD!⢣b;ʹh_7Nhl9_`{fǶfUqx a<;]7u ^&rޝ-HCӽ6o/ޝ}:u{^Y?b^7_h듟/OWvȷ} rr_zzw4oxw1>}F65oQҵ'+Y4}_5> _ (m}yחԛ_z-|aB3SjT~cۿFѥ7v rbu-~?;{w^=vsہo3Υ.%xo_.{-< I|_y#^_k2[iLRɼ$ۉ}x*118򴓫Ĵ_40R})z)0t|-(з\I "#F M\E!u)ōOHr[vXM150b-d' (Rk 1(TJ-8(ؠ [U٠QCÁr{8#i.ڒ_'TƁme0q L2K,N c[-P i)U$QyƐ"dSr[A.*W g]46XRQ BIfšDvŤdt5*Xm4.)^1;u\6pJZ5eeSA첎>[|gLkL4؈l쪤r5VyW .{3ZtBwVb[m`дzBZe'wt6L,S}}]mA:De;.! sY0)Na1i}Xlܚxe/^ҵLP"@tj`bX;,b0 (1B΁QOS!2Y7ȁ &-V4"j4>Ʌ^,慖MGnGWl}qrIbo{$1_}!5(pT.E<bښؾ$KlEXh@;y&"{2vDv/[?Ǝ%tU9|jq 1hS4]@~P8cyaq(Rv{E>/9=djKW\-[sh=nP9 TsI!Ҫz` SBTJxZdbͺۦ1o, N_%YIxmzg]-B+`NL`NP jz;:AA)3 i4_>|%Źt>{? 
ЦڮzŝwݵH&rJ͇Z1y%ibGbGh٫ZL)K06;Уc*x:p%[M!*'y\hm--vyu~)ǯݿÛVS/t{S]|wfqͣ]37?cL|&#8h-2** -&M\K*qDb1C!bTאt%ZLIš %g}v|5]]!־l x@vWL 9ab4n`#u{̤)ܩCs׶u(Te3y8y#g1US\DWr>J:noZD:,{Pv]ZCcc2KVbεRBHU3{O{畩C&SП=6] >7|]0g7·D1_b'Pm%b^x֥n32?`J&%qܡf8;p(zL_*%{U`*9WK@L0R0qٔeVTTJf9Ef"p++QCsbʈ`8\&P|;H<_jjj}zc(qFl] s,=vv`ԶjO >UI!AQDрxl:Wq &11s^0(V>X ŋnۈkrQ"b:!Qr r6$Nuqy{~+Xq0&DdP[TXC9 ygP#&bA P.8lJ9`G(g8s X\qM hMIs+`lJhi2A p^]L't>5 |j; }V6ź'e.i26m<*$3MTVGΓG]>GexY U^CAY-M`aPzSbK/;y>.iM$c\$ hZ"S"{"RFsOG/czÉ\RրB%92Zah=J۩RaCL[tUY7MʺI QjE",`g 6Kh,/2>3{[R0sZ$!eKCIasKQ&C҉AC I|).Y#MUqʓU(4LjM\h\8p V"0P-)b[{gː7}u/:;ul˕ Ga6J p&PGҲe`1ƉVe"Eb>Pk09rkb{5˝iLkS0-WPD2V`F# )(.3h?ݗkB=)4%sb>LvXK>(Оj4}QK/ d^iM"ayxf\Hy0M&R%\09j])!KHԽ]ę͵yȚrU uxE:u_?vjs}%hjN@ڨ6HI<~>Gnq7h}9أkjl<@+*0u*}*~Ѡ1%aNޡVS\U\U d'{0enQ@Jz!B R=k3 7^ຍAdzx >j6sbp1\f$uz.?՜h89?*"JR#rEni ǘ5_PN圑K5* җ|Exm V45庮9GlYfz k$;GrH']ðafQ>#Fj+W1Ktr\rxqG]NkԮ2R:ǃw>PfPW`^>}7ZCs#Z9v9qo"8mM@)~<4~7B{K,h`WFEL1Vz#NqlW\MEF4I"G@# leWdj^Q\6.r][ߧi9þK7 oƋ+_deyՌ*Nz.rjil ԚuDR=J%ZRΧs ?ָ[ѯAu2)(hiA`716V[bnCtokI?QOs=ѽ-s%m@¿|; rfrļA)*$aJX!CeiYp>&cQtm8n<=6w!{#76})sn52F*\ڋdNQG 򬋁"yjL`GxqyR 10o15K^\=$gYy@U8Bg xke&Ny=%L[oM*(4$id"RL"qrcHwR@ %' }>݌a8fu [z^&mn2wCov[&5=k+gOE$*SZt.rt[Ox&c)>*smga8/xمTFctkf.%bU"l1ŷ[t*7ݥqĘN[Ҍ1^LH"0>z")gĸ~#73  S: !J7Eh1ehɨ 7pK M9bvX7N,Gd^S]VZ z$IT0 s%'€XX7;hv;t%j\LF@"e)X;s̆$ps€3%iV4"׋}%e|:WzC]+Jʢ^ u-S*"V ,5p}YHAH[cu؃N&F:ZD>`K/-4!zUEoDϔ^B̆%P I!*(wmI4͵e &3e3AO[,ɢ?$%Q>D5Kء˺V>]u@ S1]XTrHBU%69=HX.^|Nzo>Mݯ[bf|XDm1EdSz(SOW-Iץ`|eփgQ 9+m$Ƕxv}C4$bZ0Ss&g);%=X/^ts6.8I9R},ɌR<)O:b{3sYsmU0 = jVF6$K༳" X50K0?M:a4|?3QkYZųYיhLȿ}Ǖ͢܌++Fvl9CIvNٔ~afìǜ+'UT孟k9xz1:]ΣnĖ{d_RKf.~7xubЬsrKo} ׈qj.k얺ç O0ҵ9w\iLZ7֬%mlFldu膢J,|Ӯ|sp a3/.V"Ls7Wלx6-hxgWl__MU6Ƙ9:uu2+mJaW^\nSU(0mpksݳ@Zc9;8x\U1w aTm֖P_ͷn$If`@ڛjqRVu/Jc+B]b/J{=Eiv=j{W\k$Wl%•hiJ`qoચkp_ZppVRW WN[Į`-U5̾UGϮFvzpEȹ=+2B]UsQ \Uk;\U+pѻ7€_j^e.^z_gB= r+jGPt'gciO+zy҃`g?\:][>:ᬔ5{:LsYJWi@\6xn eۻTjWSj]Mv5ծT0dWSj]Mv5ծTj/;xhqX3J/dw5ծTjWSj]Mv5oj0Mf5׹}Idk?FY 1Ӓ"şs]?w.şs]?w.şsx)v7Y7IGKoUEl 9j$;q(E(HU/jVu*Ƒimi˫(2"VLL`Ekr*ˌRRd&WR"16]:P(EVY&O[6 
[binary data omitted: gzip-compressed contents of kubelet.log.gz, not representable as text]
mB J9mh?3J #ɟ D1eKMbRn< ȯD a].qR HiK Up")MtL4VTiMJͩjX`bCQu@D>N*` R1 :)Nc֠> q5KP1DqU FEK:{_6iS1icJX9H% BMSJʄa1bWq-e˽6sDV?^Eg;ܟ0$νf4݆@ ;wUg٬S,q=AS;^/fw>/ݓ<wݾGp""B8vn g~]Gާ?-:O&MQ3)nowFߦDINj[_&m4T# $gmΐ&F`oo]H ¤:F S1cDLtJQ Wh@]_ qLH LulB8"a\fخkSb7Yk# C69| M8؁^{cA Z^^(+pv a.SQ9ւ{i1o+F?{m㾻}5Q%vR^$LUry[|߃0q1gȑ˽@{I6Jj6r[.|ںȲ"NOp80T!uQ>hXY~260oBBRoW, 6ړKC,|o':c.؄ &SkqrqfNSZFuYrk4Y1=y5feyδlfNxO^hn.M(,cR6 4@@'j5J9]՗oa~KQo@(sl^Az5bRzns$;ro@)"bR \ˀLqb.&i7K:z.PBg'~6E׳?L{H˯tRugĿ[9[~JnH_[hA{zrdߐ57R'tQ1^QY{鶘kQTh5C7޽tԍS4?r0,7 ˠ0'wvKP2m6(,N1N$4l'>|;WwP0\T$ nfS!ao C% e!P{ 498sIXeLqN*ɴ {ݝ8<DPdê$[Qd. ܆H9ˁV- LY ]&}hC6 M#p&♰b蓑o6Q/IxHc(+:sڇ( -jݼi>aUL<(0")]TmDpt1&@&|Nj`(4G|KdR5ф}*G!D+LXaDvN9`j*mډK Y|_hQe `5(4 XR~hp-4@ %5Uh@0` 8Լi%y6QR.DVKu tՀ@p^-iуnWkDa@T*zpH>Rt݇ edq\ŶkLC*f0 e(6ZwD)8o=+vT*2/Ȱ%GgVCTCde#"j IbkL,. YE7|~9"#ٌ`mGIu!}bQjmr>&4nzahT#t/ zAm>8 URl4l者ghIlb=ia(KA-f/=~!‰d˶J"vzd,33r jD%섽dpkmט0cǕҕ|Ggw=VH3/y  >(N`)E)Hk#A#09pn½zaHƷ>Ɨj!XZDU\IRs E:F$%rʼnK~}6Wӄׂ-yZPLSpҒ;rhB :dYq,fr$,80 Ju Kx)= cZf{|eЦ sHC&A߃נQM}PڬH \Bp2G#N6L?H|NLq') /hzxb} `Zs~ X|d;>31iv [fʈo 4 Wyu&P6UolJ\O&﩯buW> \냷C`ae `ZɎ9W?CQ]CU&⟁ @,ͣR(AivtR :м4v5 9^0_;\*~N;P x2!OR˭tԗQG}5?VZ.DzV- VgW5ÿ*Zaל}?{g_ υKUYlUp4p-Ll:p;7fjppzZ{`\J8[p @ Ǝk2d9H+ۇ0N&JK/4g1`hKV)2ymǬ8 35B֜rhŶ,Wɺ`n7ժR0qk>(RX^'^}2h/-xo Y[OݳX<حM:mcӧC}_RޡRLՅpn?p__qDpxfM&A2pvsRmӓKiE~;/+_B|6Sa<`R)N)׻yQ6CNꛐ\]nѲXQk>6C@ZsSQSiJ6\pS>;VbP7n&v⾢;/zPO\T#2ޱ  Qx+/d!N9zk$;׌р(*F] EqV 8 $hxKZ޸żPŻr02]T_"[go£(D/rrj&Ԫ47Ԫz"e>y`Jbu`ɩ=QKQQxK'ӘR(h~PxP~#Y21}"`{` 77N kYxRp@ΤnӁZjqafCq:edJO@к` E3N;Yʠct:;uro`1G">HVf@]P4j^%apnh;/cM wq-@ePgJl`XT e&Hf$WbH.n2T^/u!ңENfBBF0>(Il*|QIL8`T:wp"gOt*Y MqbaA [ 9bhL+Lj=ϓN\gIQDF;0 MTb.Xp/G"?k-Ѳf=ѳLfm!pKĂΒ($C{ ՁC bFǐ}}b`"cDƚ:{.4Ryu&KM͂XJ6 ܗM)D7d=e]}&Xk<:b껹xQYE5eΓ[w= qʿ ߔ.V.}$ry}uŅY0֠ZWk"-lmҧnh)},+n%! 
+8H̚+6<܁J-O=;G~)I3 *}D 4`IԤmgw>nyMfie5 q5x\~2m?ub!$r'y|#y:/U鈢t}rWy)W&s`69W (A`5J[k񦿢}Vb4Le#>x6fF&L{0 7X 0vQوUw] %(e~gP'C-~\vu.ɩ=rIzPTJ470lv:*(l5#/T0V`I|=,U =X)k֓ }'!12g!ڸٻ6r$Wp{dI3؛`2 fIv"D|\~n٢ŖZNHlYjuX"EɈ<4>CQk- )߳ZkI\/à?}C7푖h9*E8{{c{?ayh /=80,wWj2joo< ZQZb -с rPFi2|-'bhF?O!E%rbPOm(**`ꋰImDGw JA-J4\Z24i*mF)Q]QҤ3\ Os%'a %NIs䑤 J%M/i"6G-&b4LKO4LBgCKc Q@J"pVVB+ pܜ4-H4Q8,M>FiwDCR9XJi[ʢޭRT6ZM!Eq72,#RMw^yFYCaJM6Û:ˍj3ƙ82'JD|f\+&t.BZPp40O@zO~XP*FI1Ϧ$` "q9ZRBJeH 6¸e1v5HVZ~=T|cOt4Yw2˰)i(csj2"$SV*p+#G I}]acN&r,ƞg P[8w>$k]& p&LЫMi!NNQ",RW_NM"gD-N"EvI9PMgEКO!`6IK(Z%jCU$>-YFo=*}.l?DN4gӢEKm ۛƚl.j"Xa_OۃK34VN )!2(dy-sM4p#YQi +iڙ<{N֌ y>5*(m;d})ْR0x0c+r5 uE0\Eoqt}}'ͼwsd94Mue 8˘[1 ψߒb%%olXm=<[ MݢX f6ka:_æ\qy Qao(q߼] +HQ})O4dU1OؑFtLe~7HStϯ#{>_7=df=`& Idf}}_kOq-Ztǣ3dnhѰ!Tb'%E,[>I0quk0e ce޵C4YtѾr?l}A%cSгz5m.Ą4R |EDn=9>ݐ!~z_CV?=^|">2prr)/有RB[9i[JƂEtV[=tG`' .OG% !GI;jߊ2]_vmg~~w6jX;~ {n zhaQqfh|@ }Q ꐠYMFt#-+ C#4=*tsی;3TA;spœ`O6ý4gYk cW*c}8f MX[+ZkartC!^}3i!3+KT$(eVIR_-؎:;e }]g?84h{MIp(bk2yȀހQyw)1UנpfA?,e8h4Y h~(OrVc~mߝP2=aύ̸= 2Pt XqL]ܿX~3.SAUQ nTN1&ՒBJސ澙cmrc`inSDunSbuV8 RS _*d7~VvV|,z0G}kɾ{/Me\{sV?WY7uIW>c7#^#;^ uchVuSɔl&眣gՎp۩,_. JxʁKLL%s23kN ⇷8εI'=Q >= IуNFӃAU8cO+ˑrw+CtқL]+/^DCi2' ʏ 8#GYASǃ@+͢>׵ xlHǠ!kģSvYz MP5@tԽ'PK`̢'(~8ծ$z rpEz )Qs<%O0/wBP3SF LvP7[2 YmMPSs2\Vk>{j"'ўMz×z,Ņy_ٵ /`=<]|_-m7cWF #xwM s~|YaVx#8gn~^9/&J?+0U$7Fȇ2}[ ^-BdAoKx +*>C3hNQtƦuSDuKŠ>uk *}2,hUք|*S賏I6眞uKŠ>uk/ Fq汭[U[hNCgN8wa[*UT'u[{Q)c[dAZ&4W$b?ǹi:p-ZT N3X̺% Zպ5!߸&ga$,["T'[{[PZTA+[4WуuS8xxWa<`@mb{luo:P {!]0.-aٻs`uqEF  %+ .ak3k59*6ɓc-UTw@lxcj bK5pӌ;TQ! 
wy2ˬ֎Y^:PMZQ5ι Q0K߭4J*5d^WRLQd|@&+b)yyXRWRѾ..s +cFGX}!(S J,Rq_^ fAa-]!)9$έLе|Hv32Pe™7A9MϽlaC}(Pm ZNhwcc$M] 5gdWi[p<*][ޕtY*cgQfK,@~XeyjX/sx5W`52sFAW-J?}~ nX;W?/]Kq~mm<8` +;'Q]&n&n&rfx~V3ooM2)ٍ/nhv@f<ŧ߷m#{`l,<+a8':5#|,{j:VGuf0GrEHTkXu`U+۹ ,I.s#m~!w8eʵaXk.­Jaͽԓ_G9Osac}e Bsӆ5VH땄 (љә5)rK!f&'$T $T7c@.o'B|=͝c9o9#48aM(p"nu@b%'D}F vV 8RoOC!!UCEy@nV>#QdTg *IV┹x<… + ۬gPօ‚DŽRV(wYˠrJ'4mJGEcwe`'~//e"NA"h\t0@lw~Ї?%2fa[RDc}l^e$xgO ~T}Z(OUG(AxW0˒iV kiۉ+nHt@TX>6C |Їaav60F`X1xOi 岘cI8hBO>#BQKJ"j%(7S1QhdIKOӧ ΕGO2g@Lj(wf:jU9U|_nWZL>x3^||iaDhD#*D f^ihxp⠈Mwe=n#I4=Ӭ0]4vhw੢KUd L(GRu###"##Rcw*WW6o6"8W`myg܈tESOs}$4?dl}#:t%' 1? }/q&-JCenxUn?&W^IVQ;)sRe8 SЎz7>j2hTAjsƲ 5iTR{F̓n"#jm}} ʼn_*BVUr0Y?~ZϮ>~~ǫ38+~wU%-Qi8ŠIn^8-/3Ϝ^UR.8PӞ 75=w"Qa OqVU]B8yք `Éu8,\6.'dGwj݁y~Yc`&ށbmuw?hxkWv" *8(eE)s+{.ܯdpn\&duzn6Զ^OCP/'lWWZJ2 M<\Ɉp.%(6Ac}+9nX?S42=q}ڬ/Oaўj#× QXy-#17}漣ntIRNU d&w.iYm505]K/on ʝZK鬐?=da\+CKTr1qCx*b7Lw35/+K[e&Ɨ4L_V۹dWX-eXħUVtÏLhy7_!Z֋YGߛª?4N}'=:v"u@rourY{@*%9U7Be9D@d[̊Q2dZ\rKHܚ S,ZiEGD-r u~`j*۱^8C:^ER}̧dMilӡ? W&H~rez5ۮV:?󗼱ubÝn3=9~:"0>㩏}jGا6D11Hacj%z=-'ZYe-jV&?βxr+{Ӧ.YK]i2pC\8!L'#ps.'IEdњ{\&TZKP]XDqO]H`drqh.Sh[)Y)ΝpwT7Jiseꇔ8Jp:,yJ ԏJEН`F hxۅRjEPM:$Yd>ٷHc]xphZi8MUr8᥆K-9@ ~ܢBgnϚH (0f2A RJ+(hYw(JjR}SyCZm`}Fm=c+O%MM3mG1f2eB__duІ>1Fw Km/VO݉ D"fGs{c(rW;Z99*mQeG>c[:0rT](D -hwYlZ{I՗y҃" Oi|}oY B'lEW_?iQ>.g0 EeGfg5`)ԎuQs 017; MQ6{hU,|J6KcΫ{.U(vpYzV]uW;5Q3))f)QԁW1c# 8rL]eڃ$L);>%pٳ kuA]șy\B*6qO8#d+0>+%`@t1DӸ-LE 1PEv36?&4H=xO󇲈,{ثƺOawww,٦~N23uc! ۷h T$)<@Osȕ3>j^_?nRW`y~'ӳ`[BX?`~5$96 ;wOdep-ׯ@Щ ~,VoIԻx~gf*ޚ~')YvR`I ۳!ϹA._sdžT IP b U yN)v+VRTL9^&0xcvQ U##WW7 ],/9m Z@m@ WJ9X]+μS=&O=Hb 9 ٞKNVY-'ҝ$js($ 'Đ{]7J-sQ,9Z.ѧB|Un,3b07J}zU0=c*h8EBֶ~^Lpg@:Zj鰣*|Dp^ bgz:#wN{)9V'L>< eEq* Mcwcvf݄᳛>,boљ;{0$)AiyU%Z>G)Vy b)GKgun[%r2TmPs<LB.<(RP,{~5!,t]}9ۉRmϕMWJa9  2K%vG6$\,3.6]䑹"6t=;Cϓx> !r/~-|**І&W܍Z$ƫ ZcfPQn>>SSuK& </efݹTsɠIĨǯV+OPGZ7'KY9xyʝ62g<$v!ghKPLd7'xD㪹? 
$v7a2hSAS=Tol(M'!Su|ZԳllǏ[){9:vfC#E9u}-g-`2&x=w˿kG-sBAg>P9M3~;MoksSJ;L#Js rPFGp8w± )x Բ*1,j&tjrng&Ͻu#R`sڭ`- I^(R_Cb@qG׬C[n/Mo2- `LǷ,S[XGmH[Hl/C2H5*a[Z $ KZ4M_YFm)K5&'WnS.ٍg){{x]N["zs ߂'#&@vym o Үp7 )r]O2W۾h?*=P" :{uk~ARܠ1oJd+Dž,Esw*?pBYN9d&j)>X_{ P~Ágnm@=4rmE$ٸqeWy0b:1żƦdD)pܛ#eU%_vǏLR'?}Nѭ,͸U"^F핡W3\Z6^ ts~*lͲRڡE04Bxo[FA_i! {ark;9f m1i;>T,z%8bX [{Nd)HHK/1ms8ޞ/]0KW gXj<ǐYO:9Wm>S!L#_2-\D;i4Hұ@sp8qKe4/ٞTdQvqG|;Y,k>oO=(#JMEAR"^ubg+h;,U_l Cy<4-"ҏ9q_y[/q38 Zlg?#4ԃ@=C0 Խ;@kVy_NAaD4X`z<ė2 K`DA }NkZ>2>e7X-@3L᭄&OMNRE+"~kQL۹?{9{i'6ǎ0$j='-J-M9CUjDEґnv*EK{P(bٶJ֧dVDd.w iB)\ ,ps;1\2q(Eu(Kʄ Xÿ/0zwƎڽBk8CaNo(Y<{/'N`௷K  2~P~MJIErWe*}#G$($30;K7?L{ W2\6 pE[-0oA)隳>qHeB FCKk0H0%A$ )ޙ^44&+jOeL Ek5m +i(bkPR̂7757T6E ʡv/rn2wxKRk_=SABϡZDdsuVY OmPlVgݲf8(\/:a+J5kDd_gZ۶їLKgεLAU\=: ^!)4D -Ů%!NTa;$YܯMէ Kս Ҟvӛnl8H~l_qa >K/ڡ 'S7|#vӄZ=&.CwԾJ[EAs"!{2"ywG l͜r* Fu1*RlCl$0|ԺPOHʬ"U(G 2dsvp cKE:0cmm=Y5F*,aybϯSG]h0{26c҇R8~j2 RϗBIBij0; 2VrCJ]"N72/n5.=tS("]е,a{'fZ^}3ixm~"z7v}lh4DuϘ 1_W"E+_0{PWV9?ֹ)<z][c-k<=Ce*]X"sVa:'3ƹ'qطg;i}36#v,P (Fp ]\E$Vhm urt~{_QF+zr< @TݯRK?~ɧ:DR)@P\n?wU ^A΢͵kM`Fy{ui>,v;?Fׂ4QHFwtϵj79\ -{ݡ|sL5(xQ<7mF!ֺ7K0;p1Jк3dfIc`\f肑B(^Sȝ{[Y]QreAMFnܜ6pm.ma8W[.8j6Z&R+.j6ºo 1}Wol!hEFbex!Ls]Npyj0^z?fjECNc?U3ITk̟=u3WLrcK:oY>Obן=%ZD)"~8 "I\*ۡR{ Bq-ߛ} Ag<7_ w׊4}nBփ]}@侺n.wX=]P 9QI"ۼ^_mϟu $\ ۹[vRl5k?L51uJzn{{2q_OvuJ+?_ x*?Vi\~ʔPnUOq^O? GYm+?{?Jo{>?E7,Ͼs7\0J* JDHađq Hl-#sY: 3a(I9HK8 HH+P1SF`Hd]|ie XL;J Zc5"ʲ}m]f[j&^mi65%@e WG#4獵ZӱrW:uzSFtO, ՗zTh~zkJә2\>pnoOL/ymY{2vO=  W@$d:_0og{]? d}؟>c0&:U.A?1|iT.NP $F| nshs|b\z0V{ [P?'əcL?igi%-̵2.xV>7h]5}]~TK ] oD6zvE]A/7>t .u>|d_ag4]<-SRӣbCS8dU}F4aLckcj ۑabMqM1ъ0LGI"-G`ʟ;{p9ٹ wt2h?&+&(Q 4]x')dE\}& P諠;NKrSvA_}ezc'ƞ>S9TH CW_y33IZP71kxgfi;˭6,ovlt9'Kح|u8n"x;=٫{}d3d=l/:$؋Y y'P~ C6jr-tU`A0:!x6 0m=\[#GH2ҝ> dM Z5_Geٯt1n;xkǾ  Nē{]xeEr+;͎ JA\PYf<>o1%~Ϗ10Hr|ܷ]·⿅15؜r WpFWXHA2PC((b$"y"G&RAHRD`0a @e27LR }X@0*-{u@e$%)ieظVvf_5'ҝbFrMƬeY Xto*[,@"bfL "Cg$HK,҆(%U8*[X5萐qɅ5A1\`%XH~Mx2iol:usI]Zds[ZӾL6^2! wbkBKnn-8}re´I 㿛g*͟Yhmc@? 8_4s1~[Mc+CFQ&* Ci`~B . 
ގxJ]2J:ObJL(1PrHb H& s3qGDH^F۔k F1PysZ7BGHpK}O!V_j1l`ae *EIb4b@mދ%MGAAd vI.$8y85+En|QPI2ݧp?2N,S[;LnuG?r=ܤe܄2YK1aj1sgpX2eI-3z]QB얯88ؿ'ں,˚ eUHkEY  ^m9s߻)\6HǷ2BVᏵ ?1e>gj2soLݬBսRgc jb']1sZњ4/5:.m^Xe6|8Ay-] ӕa1NʰyNwL;ԑ؛zbV.2I+𾼴_JH[R\V -W*{}qRqmA*Ca2 Hno\eZAj'# b3uAzX /ª['gS\s"B\("Τ FJf9X$9"eO18CK6lţژ#zbhmW|ufp!`F <4ZFf (mV5|Q ڑ}&"{{t(Ɨ\raD1ED>,htt4(b8<܏Y*u0ʗKSʳt`^1HTv3_:Hh:>6u6Xl&GA WW<лL6&M\jB1Vήܗ#3I_2/tyǐrK]J07u\iucSO~B[&=(9LsPf7\Dk=PՎ>k#'?MLqur]cTǔDbak:~NnBBr=X`5ZIV!"9u+&@^em洕EQTN@ZʽW2P&{ebGQ{_ YRybXՉO wb8'<1'評i2 sƵh{s `õʝ|>WoSq4H|oOT~wsӌRX&0al2"gHfgQ(M5vhI @Vssd}|Roߙi&ɚǤb+00Oց -b4 SL)'I u| [dmfvnÉ2_Ќ|rM&]*)nzqj3o~BjSTM>Q3;W/E _[I%L8yxI2 A#kZ[Q5̺~^ɇBaW{%km_~D:t@ )$PֹfeЌI$hOTnw>B nx"țT_*:&tg_͸|~wt*2X(*!2#ld amvVg+\12(8E5ޚDSR.!rƠ1hg㗏?7'4(4РĂc(DJ1Hվ98 ]j#"/-4qk4{GR wQHNڔ)Ww2_.eiS ݦ"oԩ!Yn nTz.v\ tevN~Pۊ\F 1|-nj(!NnZns"[pt:[R6ށFb-nPx-m']@M?vߦ".9Koj3ϻu6F#0ĎK(C0Lk|-U\hpQ 9@04 H6 9htaBaCI,=ҋ֋K|Q|] @^[\wCEW2¬ NS[*@wBTIB1c)TJ@)9@qL #,[L  Gpc?$V |ef^BMZdQpL|,kw]}T?%[ fq^L}mllo40{ F0w}0G!ZQĄZ2T&˖DQGŠLD$$J 0wg&r!y-A<"MP:⻯q:_1]}nooVg_۽j=;ѧ gHY{?Wgj2~¼gFgښ۸_am/C~qlٸS)Ƣ#2E9/eD`0KtzÏwwjV8OOn 1 M9X'΍R" @E a ҧEuuW5t*BMɘPb=ܾ-0P5Ě׋A]%@J ;QR:yC!La 9_-@zwGmPc(^Զ:%I)Ja*k qwr4\Qo͠(0Kc5`LndhAs#rp6y[;)Y*ќ 9F:.YcZʒRZZL/ M,tRtjguVSr+R;[zo*5'eYm~eD@bkcDI})5NAn ?hG0 2"?C̄p|. ;nx!LX>ẇ~|r!A@`;(՝ >֭PƩMvƿuf5)d= RګLV¨}*upXH|iJ`0Bg*@t>iU9<]9=P)lڪڱO=H@7+P78o-[2u@0b0*筘 [0{H$ן^H(8[5jԙ0bigJrEtDlgW Eup(OY$Q%M'd[ƫjMjϟl޿&ufFMajtaBR >0T j^YE4"CKd03@0㰐2<$Iְ;"鍪Sw$4L4QJe]z|yw(rx\ҏ{e|3tWMvjeBjvru7'nr5?*&=3#ʋ/5Y^ek+VH}z@/s|2 mf+/Yy5G3;ke'Dz"e3݈>e 5GyGhMnEy z 5[ᓥ,#ƗTbo(uF7`&S(ٳG{ US/z׫EӮ߫Wͫ/+U\n} u,|k‡|C\$jʽD Mu+vʱ"Ғa+UFW( aw&Np_?F2P.5b;Dp2h7z恈F<ߞFD@"?n! b)4|'!chTp^_81jU<>+h+_Ւ]ݯus]U5kJr?RK~B`GM|HݰX#rI$t ak\z1N*O ;) =ꗇ6#~Lb"_r gҜ '*l1-ڊKEg*JLF(I.FЖ\_-vWC@ )20+"/e +1+¢ /_ u E"l"`JeC{T-fM{690<|{soysABFʇ85c]dfRdBuG `t^! $|<^(P\hϹFV3 {}\Oɛ; cdd:07!s T u!4FjVh5hK$0-ҚZRc<Q  4ZmV:RDX!JGy ADkn4(A]iI:MMd^.{-pUGPJȁDv5vo8%~RYhEmYK ? 
"I?y*Ã9g闞7[ }U$ $}k3wݺu@vzx}W2Eh _u0%pzi%}~b!ȯVzgr%ȇYD,K\ f;jatAeDSrPADSqT:xqeHe~(U`GN]!X!O&"n^.GcDM_ Hn 17OAcVK! ΐTBDsa*9G׀b$'s;,#s{.ڛ?a}p$s { ̀_#99)Iez?@|pPc~#GZq9o%@Y ltG)1c5 h8dFK3P f L[ԘK~enYmCdU85PcRBlT5J@Ek^h4cFQ/f&MŲj]M`l ˄30K\9]pjjx@(ZM?IƹO§B@(6jN13°;q}A<SUFwY%rlNO狇YRss0vaU) 9.>2 EiܧrWE:̂54J,!;F5Z`GT!u 2-75lWM_~_ď!ͤQg3P-$ ZBe =LD)X )b7X25JoR9M}")BTtg&H1J.Բrޖ4bg1A<V>dPJ %Fc<^`AyI-'U1%BgH^Z6; ",hUq@')VCwGN7ᬽQY(gwU›ekږku%Biɀ_1M tbP@![5bD|X-Xи)'_~ ?KX&6ݎ5ۋSuFH 9Y >YwMyժpTSJ}- *Mo9U9:%F"}DT?SQ#ڙBwB:2IpOM-3xh"1sQiA eTDlr\z/Z]p_R,fK;2KNFx5ww_OĂPlx[H j;;kvU]d'IhVԩ'ګ~ @JHV ^)̀{&f8payLVLQ%+ګPB@#]XWX+[fcf3tՉEVIČEWgeCA;fCɪloCI֍ E>\+Ё UX_cˌtw6U f}Qdu*` 3*=- R1JVlEZ1CC,i, ;b),}P':Ae~ 8k>@3?# j}r(%p%! 3{kUif!;[\dE{!-kZwE7Oo:V޷gf9\Sן"|(H Y>᧭KhĦO&OM}^^ZogB}ᡂU2>:~;FagnpӃat(, QS᳨Ƶg{I$Ax|0hZ( Uyw>; >v0ߧ:?h/L2oJk3΢O^w2S2v=~Ud[sG0_3)e!͙AȧkA)AfrmC!6D2sRmPmV9ns҂l'#ir"xhxǓCU~RS/T؅,N`v?ynAN9`*?<(P6#{.r1u-A{laLH^w oa|M qVL+x n-nvk]9y )n-V ZAgl3\Un8>4jkSVΑ}WBQ=<v9ds `DO}cD,!<;<3YޞX8:rU.$7wa)9l%pq18_mbo4^MWܫpږ:Jef|JJ`d %ėceJ}N*Dt_=h F9v :2P,ѯzD3hYCA{ ZЛrZR)$I6 z;!4!9)qZi`T /5vB;UR('h|'J\Ө#4VhII 0QLE(~BbʜW 0i4(FktUV#]m]TbS{XV}_8tɝwx~7ڞTUMOnry_wSLՀkn~x73u3YC+;ʟypCEb*e5#^ AEa%c%܊8_eW_%y >u }aV#o<l.' ;d{ Nq3&tm\;R6e,,DC=iaE'Pb{tab ,ثk9 {XdmpflEX${ZnuE],,vHZio%^|@R% - 07O6dOvintmvOQ.I9 ct/%΍W+EI JQJ<$_ c>׋l/S;MğOQ rJqُ~YڹF~gnv^FTOMl^gv+nRI6;3ݼ mI ۤGKK"ܥ}կj'秣s_B|`x^na8m LZX31BZ:149ʾ~Y6S;_͛fHX 9fSc 8_7s#@mRDT"h#PQjYz{l,K=R@8Zղ.6k}q\=l f $mHρƨh4k{z o\,SlZ"`5Z'u[ЦhA`/z._$wFRO.>V_?8*[.㫃ɱq%Fl0/`aY!!<%oV{&Oy%f懘o>l7w>|'tg sR5 0y0;MK:+jG4+*RN-Jn|(X9(`:fIjZ1 }ɠ zSYcQz<}}+Em Y nm3#b{܌ ;.AշՒ0FSme.CM JmJt E2<_v0ޢ^DWŠgN6g심j 60P{Y"Ga ɈuCes|YR\ɳצ`8+K`cܬOo1 P+RB)=Z5k [zW-L|vJ ee;O&z=ۥVɸd`%eWlNe՜ko̝"XӢp%-BK6ԻB(\^7 AO7yTTQQͣj^? L@D66(0y!4r9quF'S9Tt 7+^?_N&U;޻3U0hhz5)7z)LwS_GDZj3Sui֪IhA!KzS%.,B b-l dBڻj/L.Շ.,Bmȑj/|U@ƱJKeE_ XoW-j1U+`v^>Kl7h;\BGPoq ! ]|Zc;Ub۟d统eXvo߱J1[aQv yN~4t Br dzc°(ve']y Zsm]GmD֢J禍8u,H7`(CD\FZ.@K ! 
h_"@.eJ,WRXzaszps,݈§?c'ZP;*Y}d4lwO&dwaK1Y Oȣ 1Ǥ;@r_')}+\Nu`^}Mj$;I,G)Åmc0[F7C sfɨ%"Ki '.Dh C1"̑rTE]WGe,|;*S>]G}5.b֜ӣx5 7q+PǛ$Z d,ۏo|*C.'G?&Ձ`B@@1ҪusR# ׌EIv:xA%$MkND2V"0 рss*Q)EŨɡB6Csg$3LPDNQ8łBG)G53ZYZ$PU+-PB{cS@4Ѡsc%> ]E[k$d2?E-D@xԢEDfnV"Fr#X'ʄm֖BK 4ŕLI5${"?즳46?H $$Q b姢-(:ZZ$Q"wKAJsն,4piEʱ4Q-2AfcAK0IE,5Mm((n ա.=ƕhڈ2QҒiq(5`Rq'% t9Q8Ȇ'r4"t#e&ORP:7 r"EkL]fpz.3miB^`, i0 zAzG-nlWLӔ!Ժ,8B yH*Zlgl)Jn)lmxvʋKVrų!Ί"Q+zg7лP>6j)٠8D ovKMI$#WVnE%ϫjvꐗQ+C n]unڎ_,~k,<㣘~^oxwcru6%q&|d W0Fhmx/ܐ ,3gg:=3y!=Q-_|\ &/^xe"eaJSm$Oz_IoF]-L%U?.uB7tpҝe{}*09PO[wkjLwtE4H+ޱ"bCZ ͓q<=vHyV~hԄK/^m;*X;ul={>] bcFح-0~l@lˆmѼ|69yXxHZG& Nnf$i2{f/Xk֜f`5>;'a w'Wf?^!;Ov6G[[OlW-Fjf?>N$0 Ҧ尕8Ѻ03`Z: X-Q\NCpJWDXvY\Q$q98Zɷ976dC\%eC|. ZWR851ͪP;?Ep~6:; zߛ8wn*kd[rxg~8ck=Av@z!t{@p}7}θzvpL;,ؾ,>ќs9D#݇\x6r#"]i/̗ "Aner_ȒW=,߯ؒ,b,mUdQwYt{VF_TL$j8qǸ=ACEɛ7Ou1]z"0G©2]x-- x[*G9u1ɓ!ł8: `,|/VB(!XғOkJ-5{>&d; ipr[ǰڶ̢M"+iDڣJ `~vP- AYʘ@:,6"f#CͿBUL[BPƴ"bXaCt:+6T`>plR"!`a)[KBiո:mVrbmx >Ҋf}j@׹&btH֔pc?ѽyr ܩwhՇS]Tt,<¨Ǟ{L#*_hui3`1EQYqCjBRXEֱP$XYtwMZ&ѫ7=噂EYt#e *7~(Ytc*:toHbQ?NA#9| p޼<K{ ]Zǟ1$)1 X$W&.4-).~埋kN~uOsY5^6C7[IO:NM$e&%N{3)K<_ i]/"'oǶLpK銦f`OP2r饶oE]Oys 0v&>4#z D#Q*k$1,gVv/I rzG\s̠ qrϠ3\Y{& c V;./,_"r/{]m$=Ɩ(ADDGlJ@TSkG ÈLy֔虢:TIM7a_oUJ _kJ+^GNVlMǂ ~ qup.YX=n'7K5Y>Ds7^ά-_p1sc[ԟ!B@ X_ * %+A&҆whP|{p>ke|"I?0WɇJ>Ua>̪7M+E$t\X!cZB0Fzꭁ2-WI'i<x7~6ֵFz Ҏ { 1"wA{#ovwFšGMPiFI˫w|]%5U:ȵ qA.R|hLf,eF CZ$ZetiGE*]*I Q'u^,5 [!!anÞ2]Ğ9KޢRQz",Ɓ2K8a# Y:7d,IƆRETw٦)>_s2ǹ\ZJgmz)H_qo&iO&\PE`!yvGyƼ$ AvRQT `g:RGFXm8AWg{^87#<BTD L uFFdeK5>:B8kq^ƥr\} wV{, 0ωXG%H# Hêу3Rt 3&O ]MJXKtiA̝R Xk1^y@rw %N |(8!(*#%U[0"2PT$F1W*p2DZr O7ӏAeI*91jzn(  bn`('% jc4ԦnSqĉ8OoP0hBU5*@^&QT q 2ïsi D.DBY XF)"otLqZƇBQٸ0f0@pXkpkf5|8qJ؛{3?^rWQ9t] $tmKT:R<ّ6(-专 ZiB4#CPu, /AyKZ- }ϧ?#TXz]F{YME>PR j2wR21y˫{fij_mh鳳VL-Z;*:|}k0[OfpUw2]:Y%]!zʜo7V=͠fto0?hFjNri)Iˈvt<رxjXO''+VЩj u5>ipED܃vl?rd퓆X~*UފϏdgu{0N^U2Xa<<^8P?-l)o[Қ+^2w-Y{௓ɦh-$~}M_ŷe15鱗Y ԧ6VN>?:]O7nm>Iu)L_^[6do۳G'q/%D]^\%'kmn0R/0`ۙMVDIq{II-I67RaU}Υ}՟f_՟͛gIJ~<զ76VxA= U}TtuEȇ mu{6-(Uk~o=Kaov?} ҟvWVs mBX1e~t϶WlN-R艔GQBįE_1Z6bHLC諛1(QbJȐPr۞u#yRH!"u 
O#/kPMojj02v%=u04(,3{)U,I;=heH~?Cq)Ay%c͡Kz/ߌ[57h\Z(2S4\~iO$#{K3&m63*~;+f"\$Rs$'^c)=g|쒯e: )Vll:KY.cVJĬM~(?ۧDr Cdu+ UqcNJȉw9a12L"Azy-%WhȆ¼ISz4A:two[Ɇ"ݱcJqΝTK$+CVvYQ^UY y)od=^V";H^ꘗmTXy:jJ";Ķ*zسɈD3hlmUGOܺꎇDSis2z.Ew;HON4w>\x~ڀT4UX Kg[bSѢ @V'U`$ L[Cc:qzPh n Q#JӊB}n "ܭIm:-*5TغUd$oNtSU8t}k⼎: ?u:(*Rɭꇶ5n^)p`Ȇ.TJ*6YJ}#mlclZvNp)W7/U3 GEW .hz08. (@U}|sG _js]GVf5݉ !&$P֒>56olk^a v-D \  `mpLPYv`cc]8D؉P{5m|aO%6AN o^,Sv@˸6sT&AKqT"oL)Tc2 -[4T RlY!O3pJK"Z9Y Mko_|*[8zt'K}U9׸}_#fK?eF(И0<S$۰(ojPZͫbXS%Ǫ)sB4X{M2DŹ2!(!9hKPQc @_ _/b?X_DNx&9|1r p;"8A\OSz㯙0uXE>e$I0/x,8.jJ1a-"0l[9r"PA_|SIO :LW?jH973pJ >Bx})/%vt;Lc3&q/A XlxLN4QNWx ƫ*#N_n?E'ӰH -}SfV|1+*^<% A)(8, Qɂ#8Ш1՚DkS"8Zm X_-6>$8+$#*Z YpaD Sju:.Yz~ځ DQHT$v9/Wrv[Ge&#ÁgF!&9L,>%5$d fQSk}a*DK`j32"+`NΰPTHvy$Za8`"JQGV0Ljoc獰[#CpL_'=/>\ :B _jU)$W?~߾ ,Ȏ%F #h:= h%4Ly "EEb$>A0[C rǙV{, C#Q :\ʔ榎sm
U/-9s=EۻlT1n~6 c(@c RkM(hAM8Ž~e B^1B;*p6q+'J:b`Yc0yJsU"\˚֏ a >x,"V; DST28c{LKjp %+?X SB& p#(BFF+J<8P58- \bÃP d0 !k16Pn17+z1;/V&hϚR+f1h*[ϚIfQ '3d`F.(2UЫHfR9JK6TSNg i T @9D 1z)[U:+ "%Rpbm,Nx;1"-!ȆbH- &YIT-9ٍznvp;q '':=nt_ْJ]2-/.bgBo{JE6Smc.K\0ى RLrks0.?S`Z/WJJ"[ڛ"'~BJ\ZogfcXDr_@7O0Q|6 F_32J_nց1YMƐ/ a{/6Kdݽ9=EncűQ-D:`4wע$D!b"9V0]>3bW ۹:AYq\>r = 8` >\3Lc*: ~|ƹx>d4Iwa^SO o-CW\ϮyfGz0sˋf?LB\fw1wY d*}t0]>Șə<4+Caw|LHDNBoCJDՆW1ץMΦ&nXExͅW]͠ȚaOgY=u&x5I5kh 986,sԞ%-xW}Bl^^Hk|e]Ǹa"UB3O>39r[)%Ppqʼ,ۊjs2;/u0w#+%#[)F?#>&3$Jg'27GSh=LoemY:LF#Θ5*ӆ,xgPPJzpeQ93&wv6rA,:m-afUO i5E-ÔpLZ^hMʺvy7֩GTViGi}ǯes^S6%O%]YGyhR;}x kgjtQ{Pc޺S{u%y%vNaIFDj!tt\SZykt,Z6yEytKU(og hv7[U@rmX|U`J2phm5 N8'xR͹/,14zK,S`[2PגWL^uCp0LjiVa!)Vh@ *Bu=p@zn.{Fds3KVx))Moqrw3X3@I0 'c7ٝ")?f&"<ʿ؛*Ֆ#)JbW(S⊗P 1bzfSȇX$~GgwswsVb\Ff_^/mB\t@/&̫_6_hZJ;N6:-C:fFM}LTDT~H?K]XHCrݟPw(D,Teq 2wt%'.ˋgw֕*w[^j =TET7M0u*tw уzA}o~QUf~)07:M^uuRѫ~d/]_1oftmÚ GB=e[R/6MtBk*Ys8*>?>ɫ*7s4pnlY=h(Dѽr]ߤOߍ?٢Ѯ.Ak-~w B'6Ʒ&O]ϯk|XrmkyɈ Ԍ*=𱖄=U2*=@f-.gò &H*%QrN/Ό\dW@ZDd[ LCYk)GOF{"Sd"D:lDFv>$:#Vt=ftu: CR6\cpSB 93$ HU[TLaE#QH LHm l\l<}}egjR{9w̆e?\OgggUˬHO-WWj9~W=cq11ob4>wh{Y["`orf)~qcdl΢ qx^HKsV8_J"jFw??]N٫Ǽ5$.T.ƒ@'d4GMs}KC,ԑ8`#Qd+C@< !˩Q/F7Whi:UHGףp{5 U_>&࿊[/QwS[:Z&BIr8ѷ?Dna{V~MC1ʂ8ɂWt- ;QL&߮ ͣY'EAv{钕^-i5[[߂U20# =bTq4NE %F)ZGoe V{O1BT)fpCXDF>r dvx%Wj 'dv?KvVQM'IZ5Ff6O2>yP7ӍyqL`"pgPZـl5="Kc˥cf,Ҡc$QĞ lc e"WB7V+U?!rPp @pE^҆+Fk{G崝Սt^Q"Pɑd~ Ïc{7"˰1x&IML&51nvq3 k#8r*Cx+OH34hasNJ./WYnmsMF/FrQX0VeuY w7ߛ5\X?ǜ}$y&nDc v҄) VsShDjA#ӆ(|xI0:Mƻ0DBfb7yKd"VL8fd PwvC>xx% //=}6_ ʱ yLj_琦ӺŒ 9=vV֪gͲj]U7嵐q;ʁ Ϟ~NzB5o(vq[osFhƔDwR&ZujmPSO^>f87Lhʰ?< wg+2 =3μ)pIfl C Q+3i'Jk,c*Q`T#i'5qQDu%4A6VZ-#>NMЈRR0Z*iqk<vy UXЪ].j->^}zJ1=gH5~kc_/C~nm,ʱӚYnl7c(5"/z*Dg-nB꣸isDpMBc.<a1LJ,>f"ߑ͢=N$<9XK':jt>ia))/%O*w0*z;n 7&S$E`=7:;RGH3DNFDG I%WPq$$_i t2L뗡(ϵ SWEG%׆+#5-ހ|)cҳDҍK7ߛ[:VIq0uR/0[+]՞d+yh& z<r(Ӄ;%(gaTGTpoH?f{t%ZοL_}S/_@oC$ʥU/hۏQMr%QՇӫu 'bAN=yy̠aa9=n-lBI,jӮü0M0CWFNOeJ F'Hq7&&\p7_ eY ъEK'[1q@?ïq"׳G'"A!j"-QЗtWU*0k];3qsM8+~ \WfQu$>\9'mxZ$8Nq,LQpc ozsPAn>c6OVj8# Ф7O0'ˁQBDW)OXFCAEcP( &&B~,ɞYPÅ#D# c 
ħ֦38~SP9'Sz%h85i7=5ѵ6Jӓ ֙ FDb3JNfGm,A,Pp{:%ǒ 4ܧ0璳L1&E/[X|^e5_\>cLQrY[炓i? d';~ "rJʔlkŐ`,.[3ݑ68tؠ$\< 4v Z<Ԑmdy5Kى)a%҄ Г)Dh^y%C@|{{ DQh̢ 2"kAxk#\9c'sa"GR]:=VF1ӌ MeP4Ith%U!{PAL+Fj86sm ]-DkmE~&wUEypf rC@^8r;z@xz+ݨHn?c3&j↠]\F9y9#5[#VD`W)dIЄyFBhEX˘t& bH$? l4`4VYN5FfIfs0Z*J0*9jO*U=x`wf0Mof0MoA]Ea$ (C0iaPRYl8 xIiBxpgs^FEjtyQ:<|fZC[8M4NnaVʼA "* KJE8m% XjiDHxXfҍXtwL )$KoœpA*,#U2*l/U&sޠB D2ϔM`#YS?_ 7u&t# 1)jnjZMR.R寻?;yXu8|EQy\l.&aϥ;6FQ(v<5l}`zeCI*‘c/Q rQG)}d;bR%$2EY0))H ږk)$'(DB_pY,:ip *sFZQA!gdr6|{6,hYVpHN3#(bdoV/ uqM#A4Hc81'hoծ=\C6G!ؙfXI1ω6 (lliA$A [4pEZɍG{4u.vi Ç+ƒu޴VKk?FYaǿ~K{rO,n\>"Jdi%qw} K%- 6W$`%-=[_r-kJp˯V|خΆ d74IJzf.`>fC@fd+6Tt7ci|6TV#oM&|xmk!:|\ ^BPvgR*x9T2E!)o'& [S+nx=f%8 ۲jI1K- ؓð6.$J8xɩ e'#ٻt7t$TKL(Rܦ"6*^õ5v˚`4؟qs=ǼP[ dBMEI#`5IDY&䔃N\>fB>`QZ0![:pVlDŽݶRL"Y!^6ߴ_.^!pu(b4:5k!Zk hQd8E&F :xpjfaJK$'?KP1wv~]Y7+~5Y<>01̋^j[}UYYY}LJ,Wg% `_wphvևTB-,Chkc>V1Q beeV rE4f3A|2A#؍i8P/ZݲE\?]-?N<"yڇWĮHXG<;/HF_q2+N/ r0L`0̓a 3&ȹ6I]xLOkZԊ WSMRtR|gK& 58Rp|xwz?_te9z}\"7s>^r7FVdq6722V] 綱X_NmPm$UQnxcG'l0;ڜfh#Z*_H)*C@c嘣88Xۗvܚ5 ^+]T>'vZ*hG(qmE<~}݄A*,?ڰvL6rV@1Zo =UsfpϨsAR08!)U,쀜М͞u- YXig/iiiiS#W`<V"Uw$9 Grnq̎ u+ٱUYpxWzƣNFP6)>YiH,!Dd:.':(JL34F.V2)y:*=ׁ+&&\tqd0\`7B?ʆ6:֊O:CMx}n:ʨњ} _&{r C> woOэow9~s7p޴ l[vvL-T _Iξ]cn/c܂Cdlա96f ʼn|c:'ƷB+.ӐQ_Z=DQ|Tb\\/)X3[ t])n0BׁD))ZYdP@$<*̀Ts 5hATRpR5E-fp\~EfgWh+&s_]<-~#o,4ρB}iAlaቄ 8|.)AR8tq?vWЅԧۮRk+'70*zSȮa̻C5=&̙тOޒr檔҅\rh/f YDUC]8\KA."Or_G{9?C  B>}`wOd8hҋHF'!,|VJMy-ݾwd]{ntEʼҽzrEFᏵXApS*ݻJ8 &ZJa{J 6\F+cCڼXG w0z8F)}T;, ҇fȫųKK#Yr4Kz~?ˁY#^g7 E%"|urzZ;'t֢Z{W36* -\5Sɸ ?k8Z!l<9mkOAti:P-pLЕ.cx>h~tB1-{qЊ3@ҕfg것(*I#8 *3߾25!w릟Oߟ {y2'O_@6Yvj=nRn—]@`{l l+d_9TJ8\P%^YJnћ lfCRs8$3Í,M6?5I5UT:U¥Pg1:v߅f#F_\QRC=qϪYv->;+B*"leTW;72{6)ys3=Mϙ`7/W|WL%G׿u|k&C/>D-bф󂔄M:E<[%c∕ʗ=Dy$Nh旨 |qY(=}sqotbdҮ_qwk%9^;z5C"ׁVQCF&uy|ks+ (ߎ7$х-*]hT+[ ftgYEAKb~A1Y4 f2XcpKbiÞc1lRć1Fc@K{+ yveo|߸ P7"16u!1?cO)j qi\\1]=㜽uЍ:>@M+C>0;cL`vd`vx~ GT.U\O%c55Rɚ>9wJ{'*:Tn vf(Bld%+߮>UvK}H웛ː g'Bn14OopzEnͮ G,JK<֬vr!-?֔^ l~k z.\+֣둓<[" '$_I(_lSI5RG~}3yds-Σ7/kJʰwZ^P: 78_K7l]Mm AT>o戵Sp>1׽w/wnp 8lBvȕo3^<1`Xn@J]5sBR{4!*䃥zS/m *iev 
g}#jڬ|"5)RG8g*%C[ =F_?%`?M:-iYFSs@;olQk_\KGx+'JpхQd q ^/y {y {nFJ$\O">@Q|U58C& 6ԩuVhNffmqx!vh\ `ү5@Pi!:E.E] T<+vh}|qV̊{f8ifѾɺwN( R9Pm!A"bV[!$fK-eCbLp!^{"s՘{Odyr]Ni>t9mlo-'%>4g< @&u$h#0!u (myL,<Ҧl<OP z7 i!t"C^mZ2t]J !?l|wʮ8d1IzY:0A+?2WrG]SV>l_ ٛqmAIIdgF)Q" ;JEvDm+$)H`+HW~4neZO:J~%z39rd(y VaPU,ⴖ@8)(nk I#Vj"ޚG#a cǥ=ɉP&I#W\̡YmF]pvGKQM4\GE4, e] |48e6B mkZmn"9EȺChzu6SYF"+97$;ܠMOVi׽>!dAʬ*`Do:H|"b[smod y-b#uZaڮ~|;@3]=i<&5\kpRiI׉e].ۿ|?\ Z,YBa7c8ofHM&qn9K@qȓZ9-҆wIŕ<:->gygFMH]ioF+,> C03 M LK}V"KZNo5%KejҌ ES<MvЅü5~N%REv(@Īi|v=9(<.uNjLG]TvNp~vPڹdt_[@fL`KC N;@g?X=?U}ן&?)|ni'dt}uKQaL6+ #!@ X(aAX|7d GL&k%0rŋ͞Cn^ Rsu40*V /0|8``~z,b?gן\pߧP2+)N 6DAkaYa1dP'G)\1+. M7(kc*In'Q _\gTy$HIlHgi$=a"i)n ʌz9-"EX+N-LwNq~GuݯP{'R~ԷČ0t#JR&悧hinBJ(f,t+d!Pi9<.`y]~x?b(?Q(F6 e@a*)hə&/Rd  !P1KO߷L; t&C"t */^S5b؏^PnMF^gYYk+EA` 6WAkjg-bkh ~^o&~v|sXwF7w#?tv1onFv_Z0*;lOvR~ Qnbś V:B5& y"%S\\D-2*FpZ0n-ꐐ.Q2ƻMk} -2*Zn-ꐐ.2KFCu1W* No> 3wF}[^ t+{N}s|Uؖ:$Z>{{%_(zb|S!5*ZIP|2bÙi.0f"6LqWu)@>^oj'u(-`{*(wFx>~ jl6Djq^0Ow6߼_BCӰ~/a23ro"@戬_5+}>534hԛ{\{!{e@vTƓL"0$)N2j;{~:A&^Rl,1WT\;x9Pp% Ms6ut]sr\jDSeJ,cj5II&&)uy#O33If9[@@AamCО)da] s*B{{2x>rznMDV5{j24>_˅roVY ʐVyҋDXϣOi(R } E.\Z ?!p>Kh **i *.mMJLSZ7Ӂu <9TKL`R 6/<8դřW&L\a&4Z'n W si$Y,yܽuwp9[f1vO}T};\Q3#}D7GܸlDnm""|gt?Z2+1J9&G[ȋlc\%+*h۷(.FҒQ=EUi0\ۻӶgQ\xN9QPD[,JH?w'P֒A^1#OXe"D;A+ϥP;>_Q IC QҫL.FȅJM-+b$MɸGj1NJ DOO;%A+Zq)ŏ-ڞ34RvϨ zFIrA׊5H D+'H5WHyɑr!IŁN#*u$DZP0cK-BQ0iIf3Al")Җp,AUһ)C:L`BXT&75bPCX!90?Szu޼#t: }-}V N*V(t;A{nJ֕uH)S&K El!ѴGJS`TXh l+kRI_ &H,͉b(Z/k>STL.7KT_;oz cCoc~<"L8IO{u>KYF)LJ?m̰U1mӎTE;_V֢)5+&G2{~1lj'|hXߍ~bdK.FECaY,nIe!*|ˀXAH{Ė;KHv+kr UU'w?9?h/gRI=Bc$XcI#/qL*&b %!_XhȽ`20ofP|}bOIp콮o蠫K)z $QrV)\ cRx2«4ygcyb`^yLF$sdQK`X9lK!ݡ>U!~4R|X cј3Fzb'+Z(/ơ!^PSg,r( bbغ$غ1X66CFeJűLBR2&4M ybȰ@32ͮ|9ov4[FP9 iPbRE@YaPh<)sD1(- #u<&ge0~JxaaV/Gآ16Ri#LHZ$V]eRM7] [r5jX㾗TmyJMT{-+vf9rB[(QætJ$jbtŌխS5Ų(UJ {?H`a|O= 'g*\%W<w| Nڋsj7fHibu{ު! 
?_P1v×Vh2N]%PB+Zf00t~|-"e@"NgB[ͧ[M$bm5dTmD,2{S0qst+{_qst:ꁧ6Q1/+is*ݫd-2hcI.Yٸ1#|)!^B~|èqc=R Rh(;Bo&,^XV-`z-Yc̼((%F [ Gmb`0!HbՉ\y.c߹ӱ) <x#fӂQ7}D~cw"3^ QxCy;*aBMeIYW,oU1]Ui6gͨWe[+N{UMR)3jōm9AwZk4O&ǡphUocFcSǢNB56؂=cԐM;y0?s^%)a8%Rb"(JDŒ0ikL20VUhT경8,i~jA},o"3^rrhߛs HRRXP`,V@<ϝ?ҙ_p]_F\Raft' f~O[۰U [۰U-nU742#,EVLjO$FhY*#̴U 1Hʬr4%QMa6?\;BFN'Aםe3|El0ۍ.'K8(|9>U[|ůKJC2$/:?Y__\i|ׅY҅Cf XQ9Ÿ8}WfzLo║2uKrIq)ϴ2;30G(+erq,l&տW V)i2Mh6v׹, I¨*$:M-bpM? ȫ4]z0s7x,d;3?lbƕO|\wg$&O|g4糛~Z~6__]a"{`vz2~:츳yE^j/#4Գc23rob@Vx.~{jDʽ=GMՐa'D&a6ISd64[lAM|5gdCb9ɛ`Rbk!(bJ͚4-Hn Wn\'.,Ǻl|m,JFg8M3F16s2.5s#Y2s9)Z5o ԲO`CG0FTІ Y1j` %inM-KhXL}>QU CT"">_Y!&L8= ք)(F` d̃fszexǼ-m7msEU&$k7,|%9T%Z5BX* тZ`q0(WoI"!IK2.#:V6g29)V6 PZAv&0GX &p5Ψ,t _'ؤ "K ,m* <=Dyj, M%iJAh y)-2R*ll%2,NۣmڏZOvX70Ww|ӂ;O=BHyax~F0lbt> d:[yn.=\pf 0A:}\vmN\ N숗0VaZnKLU{;%\"]D|_Kyɱݾ)ӥ#)n_}Hz3X$:(a>cDBrW)9ը$ u4c8JJ8bJi$`U{zHE"/yz@CZpRϫ55.녰̨ FJΊhK"&w5&BL 2aSlEUd˔SѰ,5W2iK;}0c$f#Ԅ`1aEX饊c֪3nA(<`M ]"QyYOEK`)R|޼.֒B `:)hngZ۸۫#G*vcVn7%[*aù|Eiv>}Se\.]Zݿ>E >ZZf?v[ok.Z4AȝZC<5Wj?Was|Մ{m8T| iQiis vc^y&e]a0}咩WKl2' v=1"n}iފ<ȍB\7Lo#>ҷ A??bp-G= gj. ucp ̘mEÌ]QEkc:poaa:llX-z:VGtZpwͼKk){dqWZp)zˍ !NN=;shM>Md-NGh On3sSbLC3[sj^K12;\:{)a9%84`¾ *| i4{Dԅu>|旔@s-_Q5E*|U9;S_ݝX}]7R`qo?Hvn\wSmn*h ޿TsH q|VE)v1J3ٴHmR1|RiZ&묚( % .:}O).n0|3ZJkUtKM#[~4 4B&O  o/6Uy䐍pzI9LٴK@ja7%m8+W'{K)+zc5 ?||Q&e)6}l3:gmX!(|)&K>1Lzz4?ύ*شۼ?6uyhY|ïc_lLԻΘ~?&<+DŽgՏ #j9lv{+Ѐ1FQrKbb ,XXe=?Txfn_fEqX͛ō.&I8Jӈ[)~H]ΦwgkEˬ)(l-w@ViEYgEĞ{VuF6s&+U>&Í_\H`$ H,3,c! 
)u }sFN3!ÞCTt7|K xTxmc_j<+o‰"~F~4i8'4z N88mZ5.|1F7wgxqz G hAޝ[$Փ.%iz+>nn 7IBvZ%q)2IiU^3EtL,.d%bDR!\Xͦ!f0h1 >Z#账WX1YVKR" R{Lq# 3̓;L ;BJdv(U,"rsm"sFi)ͼID`Ѕ@q\BljLXmn̈́ygm357i4VM&0fI|V!cR s"Z }rv8Gr9ZQ'c4{@k&[NmjeNCe<".2Pb[UWBp!a)#GVEb3#p%$#*d65W.S__Jcou ]ĆW;"/\ZаY!qVC U#m3'ʢE&6sR[pBNV(hIed je0,F}+5` c/aM5.ȏ,0Ŝwѻ@]hC9O!$G8VjRw 1aF1#Ô d= F'y/03ɑu1+VOg1vL#,Dƽ1K}J58A-'HetAH=9#YvAp1C8BOY"J=: vhK0I~W֒W,Zϝ+UG̴>ƌ$|QvIp)u1\fBWƓRiu4&K\g L0Ǩ8 RGJ3 B"sT2(`vL6]m1SŔ]:lNьh̖L6<8& ׃G[lPk.ņb5̓^VAzgu833CMcZ!z$kfMѨE7߾|AꚞrK8_n^6^qkzQ|Uzu v0KtRdmrr6A`H@ڦ)&U'É 08l;)$^j@:,&fi;5l7V*k'vZy7 o$a%/N\k:v_^}Wy) ಹy,<8&Ɛ8s'IAPmhFxƳqe#$%(OQZ5.Z5W:+n @2L4kXٖ/F~ Fm#V01vh%}%=̺oٰђ{~2k]Y>z_IC; ~ܹGEYW,>a^H|;_vu"H;=ǥv^3?zv.[^ߔU^"}R9N$3;E$J$-?2J5~/3Ɋ,pܓVr){/E;/͔Ro>ܔ^d4zlMT=8Myre]frQF+_Կ^-טs(D;YE4 JB9[|[LW+T(zϗ tCnOZerP<`,$ƹ` EI~vr/Ȉ뻳Z-' u(,|Ӆ@vj\8#]i~0~l0ws3}yA ZՏ);4٧:U Ty$J藳m[Eםa c~0s0i4=4{q`ΰ~]C>U@RN qBI88w_@Ź@%c(o]yj:uG!Iett=[et'1R$ĻiJ ƃWgu1/&n|ƭ%jWn FbuR,u1Iľw`R5%͘r.o/' 3}Ub,V҄['|N8.[}Dr{[-b-ʖqg=ptr7ی^bʨSI!=|>tX{y FP%jJ0T DJ3ivc H7K»:\1)i\čǷFֵeWZ*-UÖb*XekL85SDHI˒%uU-$ ҂jF8b! 
.7sT9bF,<|z#…`4C?a@@F!ߵm=Ns@B760"|{~P@Nעv3|_D}߷ZWԭ-3RN淤 kk;o ޠQ6s.`% @] GH *`Zi*&4) pD`G@ byǤ.0ɫ*Ԣ`VB) rNv!n+!>z*C@p@AxK 8./@I)ICrDW7d8zn?Rlȳ <%%)I@eHqݛHqr,(@NBbjO+^.{|}[.۱R+JfDU(Sk\RJ(E`)+$8VXi˷ 3+gs._JhYsgjZR$JD`Th]GF֖JWb 'r#Q $3dJ4;hHTƒR*Q<$"MA[b8JN\kGeuF33F5%=p7S?zpQ1DT!,<3*Dj?tH5=R dA*Ż`jx]}3DZUk*'B`1K~{5bj : o=QL.lo,P0X$ꍡ+,%ŕ EXVa XEN+Ƚ\:CxG!z>.m9]^~+ >PK\AX4sSRTZ:p{k'4D;GMaRvobf0)Nb51ίP6`2y+eB:~G¹(w韉SWu[ƄQMNGr]q&xW)4|L7\e|)>.0tqScvq'[Lmk ;UghJ8eB,5KRbu@?ϖN.6Wm},&Po<хv^3_p߄cq ^Lעv~utп4tU\W z%gDpI$6$MKt_ri2Wsd3kxŔ3db/dҍqO3Y}b=$lZ4 S3MVdIW$A$.c,1Ar!8;W8էe 2NB©{#iXQ ~ [ۙ@_BKAu!H,#w$ JVbVM~ 8c['oh IT N W窣_U/*}챁dc J}YS1s&[LړMuxcnIB~e~5ѫYB<9c}?Ojϩ8֘z߯[n~ 7.9}|F֭Ϻ81MM+4#_IO~a-(W?I.12~ǎuU-!.턾ôUĝںŷ nCH.12՞D\'MƐк Dtbźɜdn=c0c3NeBCwQv*w~9X4vjKoF}svL)?z}[_O'_{E^߽-\uׅc,cC`]x'(GkgX "9 OTP `2VYqƺv]%c|YOfڣ8KhtSI8" sDsPB'P ( JС&3cd9[s֬{oֶMp\#gf"_.|NUϑ2WL |lR")vޅ \JeLEy2$HEC1Q\0y-erT_7(¾Y䨜K2Hy`+{6WX(ZF;ݬFeﺱ?iIĪ}!*с>IUI+[vpkՁ p3mO?K0K'd#>+]jn~Å%P.>}TUے5K|~j0z_Y:a"N774!fPZuf/b6XL.fW~"@ʐ^==ل,@ Jv" A4+ )Jxm_|@ZWv!Q\KDKg(&NBDs[k l2`钨5 9Sks9SXvozj:NN'.zo>F/Vϝ:5וzUI7Eq?1՟"N\ŝ}͝e+OAՙ_o$P2G)0=tT j%e}3#CaMuAĊU06f6!Ii9O dmt~:Z>1Np 6B'| ǔdM-OiSwQ$t'?ecXiQ1*#IYViS#Tݿa}\%tݝ^^Ix* \((yCR٪5w`3}aZ!+@{=|tryBp9)Kd5mE J0%c8{{ ZęHdO;Z(.SV˜voURx==-CVp⼡N]M;SPB.0[ӦSZ@YYMj~C>.}58 ^'߮"a@C..&£;.8Mtr}60< t̯g}p~VTa^$uqqV450ɬOC"Vצ 2Jޘ<_mIJ,nb+7ӉZ?uOj L#J aTՒ@\lҋm5I"qAm//1Y)nb$:6Qy 22"NX=kws-C!Ȅ9Z8Z$v1ogjJU`Z񂁔EK"*18{,9]{/etr^arK U iOiL Łw':8d Ggs38!wWBt0P5u*n{ RY1 rC6|{cw6>o/ [L+ ΝaiBX8HpivVkF 3Rw eΎahay?䐓2=g$ Ձ[dZAswԫx70:&|D|%]} g$ ~"F8օn;27۹""n:>=s2GRbFib$%*ɳC!$wZy}2~eq OJS=W6*Q g3=OQj8EOWZF.N{>l~runM4rN-Ժp;(/ 6V8{p$T Y|>9C3Lxě-~b,Y<)Y[|C 6b[ w0 ˉ"b$/.}?`GM }]pM_89?vӲ(@$X|X+; 'nf2xզ2-|F_Fs󘐿hf4׫hkw&ڗwjJNG^ \lcwoGW?Nq1FTRJGf>/ȵu:>['ލf$g~z[DF~GȻkK)efOɳLxS: hƢK*Pg;D7dEvMPQޙ|K~vba(IHE]qBE!mSeCVvqޢSoF/էNh KDl *"tƍ )+-ru9ٸaEPF1 3R]rt]yպ&iW{:7_$ strέ;w/MgmnH!{%$~.WXĵum%)<%Z)V_HJdAqEH== W7u5K TXVDD2jpLeR$i̹`{w8ja lE8{B@%g,c35ϔ8\7ه"0n\U!,MP0x[S]9#<)ea:Y,1s>ܝ= *NqcW!,!q Ċmp{uVCeyYAc"s]e!qU@2ѷq[y.wa&zLo5|) 
y;#F/5|60.ƨOVz ,༭SNQ{g~Vįzka9nËO'uP4|Hf*v:;n`mFp6B:Nl:Ou)v)83[GyG_p*_ 3kƠ^f$ pR>^P57(uEe?.Q0z ZGE{YQ:|וx2pþSȭ`#Nsa!#.'uBb5$Γ)e+9ѮJ YhFUIJ s*t:8ԩwXx5x2f KD@>sT}X85'XHh3u@bߒj|@srVށ$z9$=()ےw%`&r5:ꤏd$0efYH" *R=cAvemwy%ྼ꤂mE&Ze;Y' -)ZXBⶳ̛ls4iy2sk^eB3,A p))tGbڲ@2yƭHk?O ZEk)P-! CV|]%Fm0,|Sxi*($;@Chk3 ?ДOߊ@fjoRZs9nWi ,,|dDIC  . I$n-LےZZ o?0TIL5m#Z բi >)?u'uҩPf#U]n%z" f /YHΧ.6;$$#9[9g͊M ޫa f]mJZJ(lSkMV2cy (H[RCh)I[Gyyj4%n>>P[FM7 RߕIJ,5L/>AH WR K[{)q%`͔{*XV. Rei jFK2De RXFs]%aU%Hɒ}lr2BT=2saG;T*kwO(wO'ѻ+j|ؼj™ myO [4'.:|O+zO?N ֛oZ(w%࿯m_C QpTS f N;\^/%CB  _ lUtna*8\e覻!mZY@Li]8kl 3y 0HVբ\"?q-8nzv^,4kfS$PܦvMbIeZ4wt}*t[ǪYB@ImߦI=)!X۟~|2-F \L~86[d [Cъ B /oݷ7B W|\J)mUzN~xvc\OAҪɻϦ<P[L3IX$K:Af^0[QeѲHK')ɳ'q^;Mhe6o>z29z.*)%V3N 0p2SR8z3%W}'(e-!Uuo2v]qK?~6⢱mf|'oW9,-5 $4_52_z[W.2oiCJ$&~E`!?%UָFd։lmՐj\/Z;!$n):ʭ8@S)Z+]SlrM1Rh]@lEUcX pWg `XSokuF ]<fs1z}'Z*>πWEP[𝠶1e`V/TN1gnD  "$\ @)u}uu}km`!}NT"*E>|"ʦi;1łO"mFRNhz0>LGC3]՚q/m;Ƴa -.ɶɀ\"iu}40C'3v. D+-]߻O@ -pr gr*G\Ra*<`qv*|[=)*9,q0Kxiy3>񪆕BSI6Il6 |(0rY8sc/Ԍv1ߐ2!$"k7o70O:|=5]_LzIC}pڷeחӢW{? 
[Binary data: gzip-compressed payload of `var/home/core/zuul-output/logs/kubelet.log.gz` (kubelet log from a Zuul CI run). The compressed bytes are not recoverable as text and have been omitted.]
fQ]f6=[t4xqH!_)&P8CAwY1S,19#ǐp +w˯.b v"c A/ΑH8WC-6 EWZ:pҟuNx0o|K!b @)Ȁ$,Z=ͪ䝍a&NGc˻.9 Ɏ;)!T2zҍڷ.;lwzEXXudd?X$lףyFϋ J)}%F[Tq|^}b{(CЈ^dM Ѐf8ݙ^_gKVV /#߬7EP*҄ 4ib 1eUה:c>l~jn$pAE<&/k1~ $DY/Ny{WƑ OiCzؕ4 ),/+`C'bd ظ"-̪,of׉~ ioG\wiyU$ wqEE _J4P'g$!'@7oAFX$7|q.1 `=1*/<3B?"="ѣbYJrI'K- nJ_6KNWj0] I7%'3Wx|\B5$^QY/ISs܆Z.{5=&7 T^ utJ=^tz&!5zǰn]7/׷f8/mE1._KUbB[`^nM'n툗⾏J88b8}L`Ӣ3ǽ/@sI +e* 0"J@hYEm:3 WD4Zd|x %a:6"f1U J'E-(á:qz@(-^.UVGvG}~;g|UhWNc=V +=9R9RΉ{ť" & E0͍q:E$' Cip8B"9L*/m(Au!YLu!]_q=`b.)X}m }/8YSUϘ23L9ߙJ~)]?`/1̇ Bٖ_t\.FaQIacyP(xۓrb?)lO@wgs~쎧0,hz`Q]-' m[a~ۛNE]|Sb |+ .KR$ mc!'9cHMJy%UHK#!5 Dh7&F$N% ۿ@E9IOmk9f] 4I\*Jt A̲ﰁh1# A!-!xp.FP%p> cRHxQNU9sң[{H ^+ak^2ܹ\`RjBBhzOJ8+EݟO?TFZ_)s>Gf: 4F2:`ƼDHٺ7D[GDZ󝒳`Ո7\ηfT:0 ZzcɈӒ0>NCQ0̲)յE 7&1_nYc`LHowini O; "+`}M.`l*MuiƩdTGl; qz8ʭfpǘ^ID)\deV`98d uTclU1܍G;m}#)ֽĆ|.VC[6?Q)nJ"YUQ*0zbU`dyܴC=.ӈqIbN@'ÓaY\Ce۝?)/ԴUTYҚKfW &a}]\ʞS5~E$]$ҧT@v|3MOweUD6|9 wN;4cm V4w6Y.H& ם bJ(JOv֘3)sJ-xlQ8XMkYe9Mߕ6V)vwse'5>do}Ef 8PluF-z=!ŰT9o'9/[d,.^"h4[4R3j݉Êϯ=4\K$YkYvyzpRTKEtY-u!ίcXi@[XO` ¯8Uox5͆wVM+ɻ5˷=G&N';4=%`Ѷnј1 04:tjrp{(&׉I^FQEGZGfj G3,E}Q!fN)iqFG#V: 㦁1 T=W4ֱXJ51P0,wJDcwˀW嘀b\vlu)$]I>zABw[jmT< dȩsR*<["MR5l"lS&l.`r=Ù I =Czj>7JY@#Fk(ɜ1?S l?FZto.|);F`E%ip+n]>CD#l0 RPN7 w1=4BDd9?~|d:[a~{;{G Xݽ L04߻ͧro S)4Q#8xa7{XQB5e"Nש=N( Dc.` {7X;tE?x,g8a,`B)B+P-(x16"IZ}w 'oE7 36&LC"?° ie 07tL/0T/ RM~XUyPJvtϧv;NHZbΜ-A9VYBY_X?%?o6$ m.7Kma^\-+P71}\qN`+T!X$=뽍H>O=ftF.'xXyaEZt s3TʍvG!OϐjiQb5=Yl{xȊx;2I:3 \USnL3\X/fj棊OV`sƭ#[hph4asAֺb N0ʐ3­:7{y(- |\d|x=R׿[?e+"vhU(H5MH}:i}zrBϠo8o7q;`Ťm_-4s6b !YW3fϴ@6橗ZƳݾ@ `Sz_ +mI-)Yl`nd듡'jϝE\/wm4 ]#Dڃy^ؿ،{otLjsD&qP*BՓr״-~I$&$ӑ ՚K@k+}1M% ҝ)W=DI/Ɖ04GP?Hl/Kg|jXb>bl_]=YG,`|g3Ȉ |uW/Uи-ˋ0u|w@Vn70MB_%+C%3# 2G.^'DKxs^"Is96X87Us6=A`2cr1xf YGD8.dZgc;ŬaJ}R:C+Ё\;K.^CF8cM3>DX.M&漥dq&IXszқ[ЛƓϋGj؋IKDO՗|ru;4WO_ő&嵿Z}_ayiCKY~-JT?MƣL(a2Afw6VQ,%zk?Y1VL!88E.ٸ&x(4緽|c~ )x}#7Ky٤bP 榽^,FEƎǐ3w3ɭg6T3!"p|5M^i\&4u{TvBz] \`2Ư֌k̕s.a9{׭ȕ["ZcBZk* b;l$N%Ǭ߿G ÆE !B)M" VUg  A^kJ5Zx9$ ) [e3 ,hc \0 Q1DT=P ΅݁'R0Z2!F9 7 Vr´AZ c\j熃G=6!Ҝ4MBc1!g#\~'1x'1cWlv[d`4{*9Zhldv<PխJ]FV=,CV䞑lI4`֠F mw 
:LA#LuzpZ*2ݰìle(DJ=5, ZgO0e.gZ;ȧv3l9'0:NGGJs?]mo+,w< Ebػ v L}ؾE#K3yU6hck4s<$<>qrjTNgc)A RFU}#jA d{nXoy>P2@"g靣Gj=V ~b"OCGy/~hkz( ","%1#5"ɿMf(~=ټ -i7”f6%-ʎtBn ^a除q‰=-N_N2LxU=qO՞tvӶ^9@\ˇ2[GBJӕ.b-nM]6-n߇6 {oaQ̮o+_Z;F[AzMXY:ܼW q"ɲOExӮN8]՛k?̖w!]$eKW`_:'xrQ9.ڪOUR맺=^݈[[ bNgnNlvk٭r]Ešu? ><˓q`v|sٚ!4H2nwE$a0ka,W[z6Ҩ.9~A˵\alOQ#ǔ{vJ͎cF-i-3!`$! fZ|X(bq)J*be߱ hI:2J*$5^U9{OУTT74;ꍆx棳R{SdKZA]|[uFyeɄ|vs(Фg&~dnMcL2(QI9΅G/7G%;lۦWN+!QJ#DCgZ^i}_8|Mfe-wB+c~~ih>)tO}v͜@&,Nrg#9eFgҙ 6:VhdFPo=K_?_އe¶z=[3em}:݄ͫL Bp 'Fՠ-p[I|I>/kc=r.{X,CdpGzߵ;wehI>g9dΥF"Zhd"a%EeEN"]%qQ-tOS^cA0֊L(rZhLqCsz!Yjth` X%CЍO"VWZw{ <ֳbBm_vklfDFJmD (81B 4:IɫY=(%4JAF/K}:#S:/y]~VKdwB< rZ+!KJ@dvMAJN n ;ܵ[Ҵ^~r _ C=W~~*S~\,S)Q8}uLÝ<,o/>.Y[:"[Մ \^-pU7. MEŏ"5ŽsR!(ỂPΜLHJ k080Xm]PlR֔ZPy̧\i)ٔ֝ K/BRA'o0ngh) Hw=nռ{P+u^8"zn37\;w-pdz/\G 0@ἇ[-8]^Yth `>\H*S֛ SoY0{xo\\@N9x.a"-D=6?:߰z5,f ޡ`vP )3p>I/$ U̙F;0QB 7 U[Q闝p$Y])#*>V#G ߗ6%'DӜGQ_GQ,5<J:Y99RC g%EѲU∹@cJVD::-پp-P?/{cE_Oo U]*L x^%"&C{@k:/v{ $mݗwdijJPR*r|W\ H k@G=űtzP(P Z'JCΡ\h"|iZ'|KBbsd6Wd8&o;\ 5V՟OR xQjFσVO~_ۅBa,/Aa5*~"5jǎJ+5NRT,/nF?)ip ܺU^uqo j$A\LH( j*Z|Q~ HHgOUSuw2BDE1ELi{WI{Gk9XQ]Ėd|z^ ))4EJpS$ze_oE)8Ya\J_nM? luab4,WQdk5@"E|lءV ?NxD)7+Fr)X(TxPE2.X)0:$:(9LƒhIZEؒBr2%w ItpjlBM.ʉu&@Mz<9b"I璹n aDCe Mk(r[/"zl8HkX ՜hѭTݺ3~OJYCq5< JsN\&D`h%Cu iX&I\ج \Z:iʭ0$ ,!A  JǑUτPəPCcLr>q#ܲBy9O)%\,õ(ҌhMjxjM&/JIE.ڏ䦉zA@ފwgRTp#ouAP}SVV6"@NziTTJU OE{eV#7Ei4F g"uyv` {͚ QU\-q3@FݮCmvK*Qz.faX(6\?@E]d7v5.C !  
Ġ7ؗP2^(.v89&"$zc+7>WHk")Abx qB)++BSF$0@%׎ia0 )HJ!uS(+1N-+s?S6|ZyVȔ{{u]:}}xX"K>xÃYw_T./oΖt~}ssBU^rc><,Wן\O3#uEv~75X?>\_냁uy&|wd(}˛P xɤ_-o?D ǚ"gD' @q]ucTw_G6^_Њqct13$P' PP0 ie倒fůjEX%+>t-%%Ec}sf\ϋ7]eȬd.ܶ8wncgXd%Efd3(ieVeDK1΅v2Bվ݊S,e }MJ HќN8Jnu0U)c} ANc79 0#mRoьT.b6ЀJ _q/-2IKti$_|\uu#Nj}pkYaK[9,흷wy{{W'UZKr&jZ59o%hJB!ZPVV"'Fb?~5ܿ8~:ʮFsr5 my,VNm_4 9!M7h}:Y,^v#ؐ)x9k/5J W{yZXgJ0 Ɩj9EfѭvT EEdIeA!FWwXbpCg*!-p0䍛&jVH!QjS@UjTLr[1p*-%]*]t]@+NEv*D%ŠgINQr{jpBK:Exvtܙy%NvLe97{\@W$LP_(4wKq)aޮu~qG{ᆹ<8mjU!inoaXWosunwJ)BhV$Nh:F+"0]Yp)G{xvEX>">}rF@w[sr.Eb qN8-wWן~)=ޔ p}]!(*kKgxk ݶETJyLNqA2y2 먣Ft;Ąa"-KJȓJ#FJԲsp Br6޳q)>Gz¸_שrYYZjT 6 zKhp?9 %I^oYhΒXWo.>u_Y+varͺ_u_ ~^g|.e]xዸ/bOnm],әNNf\+ˆYN?yUyە@z.ά˚*=Yȉh{*8.wG铻cdi)ؑ#7' ;w$rnIaD @7#F؄ ($R&[PɔUAQB%ЬRQqdȣ;\)-*åʔtBU+ ? 41?(ԀgatNrHNn8%&ꧽ@&'8 w/imN9 le?wT~:NrAqQ uv~ۆjl򄽍m^^;+ B[̩?٧}CQ^J,\'*t% }X9VJ2t,n j=lR?+_T7Ge}?m-r h 0STP̷-4Hm9q"ڜRsPM3Yc1itqrӡs^X>"Xt`@D ózW0{(p`tz zʧS8 Tp{=B`ጊRhL#$Sm21n<3W*%++#"15^I*/$*,h% !r ir$4XgϾMұBΰ*32UYbRyH*K0e%lp (@4R[(qU=UY]۬W+S( TJU1YQ̉X T`Мkj;H- uj'u VT%s"rJ *pR'eh%r2ȉ 5E)i'uh[Ib)=Sxp$.-| ˥҅X) 5gضHsĉvGNR:hwBND;ٔwD:FwogEC[r&ަJQ_oQb[ _ Ũ)" 0nq/cۏ_! 
srޮbmk{#XXA,M b/C{~cbbr8wq~o%O) TOE>=2eLQex]{ c Ùun֞0R41| Li<|be $|4ؘEQQO?]MOwgԐPcl_Iן]d;NY'5ٷ#HPLIAHT(IIJ$=R[Z9$c #¡;"9 zt`PpiZ7*Ie :tH:Eb"1W1eۋQ54nŨݗXBP*zbm+vl 5aim[>7 X;9k,;ջQ&+ɭֿ}y<\뤗~}DW֙2rwnjä^"LN/G]03#` !1 `bea^ثŇ[c Xib}dQE[ԻE=>e aUTn9G]Cy}_a}MxVD_up]x$[j0y$[ۄ`eW\c]*u yBSWGEStC q!ri5's¬.*QUqJPɭg7TEKH#5F(qh0-MT&(HRmI.n3n @beh):+:d Z 0IYVQs RftE-49sJQPdJ Z@v[@ԕ$Jl&BQWL"( eMA9(:_[??EIvBKΰ=40<=F."жpSRn;!pBSgڥpRw\4Bw+tۭq\7t0}r˙QϓWZ2Dv7 s^ʅTPPnA[6F[Rr坵n !t.3FvȾO-V eF߻·eirMgCm{\#ls{0IޫK<^o0i)^ŊJ+}r2UU5}p 5OjbV; aIP3S۽̠<5P59kHt殈8%SWBZ\u|J `ޒ Uje'~;u*Gm>/zQ(8BEN'"+ TJNv!ӨDC>O}=FHj7<]K^P!FLQ)d?xS}|&EXaH5*22BJMsS0kcEp7| =?\=Jr <5%,/!~d+>#G}UQri-D]:3í{KK\\㢞l!I1W {ACE Z vF&iVWi<}:kyvzJQ%)k.م%W#2Gą=+!U71oDw \.xkJ,ԜfL)Ќ#%$^^Eӽ9؇JAjWtMXnـMk%H2U((4lz,s=y7/[0p|cOy?電.1fdr E Ղ"@Tqn=Ewc9s_3 BjY*cwɯV}^73bzPKĝCMB3;nbK:i-zDmKR3'dC^bΐ!/~1|>柾Rq+t-2)6{~ >4oꌸ돇^zwϺxzUAو,5CvW_5 'Azha+J$ _ّCQ=R*|~-{1ʰx +'jx8rɤ_硍ț_oep d w>Jᤕ-DJuq.%w&N_kJ$ 6}'b.-f.-N4s`ΩnZ Sf ~Q8* # \BrC+BPhF9Ո<23V +9=uTu} gIwE Q#RK!<|1fqKsJ z?9/s u/ޒ j烥#kt(7 .84y y'8݀ xnwN#@1 iA/(N BA d,P(Gȩ /bOpGX/uP/? :p?_LE*[BJTyY&  +1 ê*bP[P?Zç)QC6CJ0"DCٻ6$W|û*'Iv8vv;uؠ@.JvLoV$Y>А(&4ʣLUd4.]b<6'>5P}#8|8/(?RdYpͫj 5\ mDLb9AR`+:XE1ΆDdЊ86^DнUܱ.s2iH& bPfx䈹f?d~ű=}|UL|$sf= pܡ3J=8+@w$ 䱀QC[%i!EDB^VLh_F: yiB3NSŔ!C\f;ńt0aDDQ6B&?xmPHK~ LV{8b_i2؅Z]^Q9^N /E/BDV-2;ٗi'2d_NonƄs AuiicpQYs-J9JB5l'S?~Qӎ^>,n>D[CG %o*T:2Y\&qK!CŖwm+>mN}ZIb`Q'HE`Em :VEIQGfv i&-ܧ]*\VhR:% &)Ƽ` 6h``A*+͵Uux"8Y…XKQ7 &c@ V^Uc!PIP:RWE.o2S"$OhB{tw5AkHPIչܰFFS~ C")KH ali%C)C4S#@PD(;lT* (~"3EL#{ \\=SᴽSЖjfAL%(hۧݔ i.ð)PقH'vZLГi1+j>V˙DK <.W>ͨMsEש~R׼o+5mxnS7@z9L3;/;D@%3>Z`gooLwk"̖} nɎG+<3ԗh=tUw'[{x;|{<3ҲJ}XfMo/-,>O\xoWoMn97]Kfr~wMSJ¬^&VuΟbrr 5MŭM m=zm7 5*[~© IB`3 N1y6+5D `L+u˚g>W{ݲcATF >5Q'VeIzJ:hqBњ;2Jst9-}tMȣ #b-0ƖЍxS)x4޴2U fgG\v7w~4Y%r{ kog7v$MXR:5o¼| x<ݒDY.}2 Yj^od[ӿ ˛#ҥć4t|+Nq k;9&f]ӘΙxKI89օ ^@E^; DJj:%2Ė)3j=ދ \`rujEڻ$`x Zڨ&%8% 0cѳ2ԀjNZ )e!v:>8ךKMdJX'Tp(0!V/FsѲ!VXnzϧ26!)5 لNIDB.B* UÈ!e6&# Щ~8`h[Ttpq7? 
y<͏̼1'4Cǟ)?VI1óq "KJvl1:MWyS8XpF)ZZjRuzk(;: 9h.-q0B=|Uq"' *jBpDR]5f\jMN5OtFx0`(6SP.Q@ I5)]Ğ:1de_̀ 'dO=WɛC~ZI.Ai=NO@=wWq z^Q5'1Ae]P "vǧˋx 8H ˖ZH4 rGSɹAbyo"6A:vr%p&;&1b6 1wq,\/ =1%_{'M6,<iZŃ6QOXeCWn1bmQ9fɶ pv2_[ 됺絁 hhF#͆jB@ !MCj bͮ?8 W+;; )ꉮQ"JS*c<(phHqGQPJǂVкXh0-R_mE0ڗ}<ܦ'mvAqkwPQ cIlo`憝IøOKkq7kȴTp#3(.R9X,䐞i40>A\[Bv\hT!PeR~ #ڡ˓ǵ Lc-B갯TAK@P`Kdme7ZVⰟ>:AYɜgiY"@$,\.ULNP3.b# ˧".,;7"bhŖwC"I:w+ tJĻp:M$nTޭ MM1/xi& DWi"M̍m>zcOOVqsnE6:sAa.F{*΃vvw7p]'ō/م^V/ϛqB(g]vN@P6!ߡ .ǣ,͔B.YV<$C)| |d£t(87^( M|爞VXlBBO*;DO>{ 0%Ε ֫D៨kKMx& sNM9Y=V^i D)AXnH?-;XLx{Tx '@a{]w`ΩSRT2XW”)bnADGr?CCG![o=8JM2Mٗ}Z "z^Ԅt4UV߇ Fت #8xwaj R=8}w;>ͬ5@$HDZI}=nxeN0JCBn6^i]U24"Q^k56Ptܴ @奦9wӴHs3ŅAVgŴS(Ayqrcȭ@BD< #~$bcVY^ea\r ?CsH (H*k56X@*YO=GU\bbYw 1='Pvh#[ 7 +]iA1IB_&[1ĥ$a#6;c1D FC UDZFs)R ,ySlJIYA@Rvp{kUqi`X s_ 걓}{= y|%-'D cl2;H Jv5k&(Fc+~Qݨٯnq9{r[~7W_f֖,,|厠ۗt;Lc~TÅآ:p " MEE [JzFxh侏,JYrnyz)~'6`sRΞg2a }{Eݏ[9 |X&prXZ൞l'HCZ':u=} ]==rOA6`m@w|DPeuZy:WK<{~$':eSo'r8S>kLHcHx4 Rܼ~`&l4t|%}68/h>}3􋋱%ҏ+[`@Y)gNda'ϯp7au$h\ǩ|QOT6}s+b+`卒Xi[ށNU^Xzb9|"Ɯe\5vW1Z"rվCa-aO=; Nሲ_q[X-Pܣ\w(d ~ŸJCL4zu7-/.e]hz}YGXL^dN3,h.ٺtCAQBM!ۘAVIFF?w\?q|Ln@je*== 1(}6JQCXd֨~:,pV馲}x![ og%`Ce1Îi`V"nXj&Sk8H=we @+wE&7iiG>B-IC`ghσVI68Ikƣ&pbB~F”gb>R,r2filґbiBڭUM%Y'Vʞry_U PފctqluU!Po֒*s+29N݁F*L[RdqE?i$@,)t[&Ӻ23z^9k9w2s*8T 4yjN)+''WjA+l=C?̚ سݭsхM ~ЊlD4g$Ixා>',%oF0<#+do0Z7X9 VwSֽ Vs7Y ԶPĸ2ҮY:0YjkQiy(#iyލ7ȶ?gZk p& 4͜v8`tH^+\${!'vh=s纍=z(LQ&":;W Ey;SVfI$[Xk$ye舝~NlTw7Pc{Kߧ %z9[y*nTL)lp"PTȓڌ:G9+}s:^&!ԁbZhÆQZ/°J8y>aDgU:sًcxs;M>kK~ǻقqB?g3[DQ?x]CƐ_هn|Z1u@:/&"g6Σx1vv5jzCIOk=آ!LJY79 GU涼7kA Dܚpmj;vzC8LgM|8Sck4 VN!{9#ώW2Ͷ1<;*4zcc l~zg rEפb;X5)_f]Eѯ>8? iɳkGЖbF5eFIV+K!xyW>o-^5D['rީ罫lVvm#JC/sݺڸmзmZsƨ5zD>o})OKsLkd䟯-ɳ L[-0nkP!}<k՝9!HyC:Cѕ&mxse)9+eґBQB{ qmTa #U J^=w=͂l_ ~iŶ>L~Z,? ̕ױ$(Q,8(eQ1B@#p*Ay^X=.YN9m9}(+b\9F@hEzY6A20*q oQp%,/'[Ccb,`6hJ\ŃFi>y{E1$%e})p2Zz $p=W Xe^TC`!4}Mu 騱4rq:Pҕ*5PzdQ(X?ʛWuR@Up,v4B2KSЌ;c D}T\0F-K"iՑ[s \YV)uQ. 
OMHs`1X~.6*\ ?߭>\0[/飿~tM:tZ'F]^Lg9oww)\0n;7}tع߹_xL'F:P"<?qPJ(ŵ&.7}qMZ 6Ȅl+КcpVz %  "J?B˥qFy NH c;P7e0%MFm i32}ؐ h;aͰkqa{[M{} < *5ݖ~>Hb*zTL[ڢvy6L] RRYSeSIY},%-bET#TΑ7S[m iӊ)Vxi3^vւyR[U@5Dg}*ގ1-+4Q}Q I gl)R`(=,m\x%!3seŒaʆQY)}_o=XUlF q`3}nkuaSxWB^YiGF:-"},,6YoW.G ?9ϫ&,"u]f%*>Va ^0!m`dz)c2ZB )dd`|QIA.RKTg"-U*fEVLV"zZQ}ORQ/r?>-\8 Y*@ zce)1m]KS_4}B4w>N翊Ng 'ɮCZaXKhO&7{$j"U Ŋ?/Jgzn⬢ڱ7nlJ6lz7SIݚbc:MQǻ\tjͻ5hwkB޸6P7V[S rL3xmb$Vtݚwgz6,䍛hOox7jޭ)9u1HPһ5hwkB޸nSZ[!;jt^_ڰZW·o^_[JuEJUjށPdy"mGI_k%)+R RBZ^eqfԠ 2Eӌ@ct!h{_(T˗eвG I9+WcDnPq`1,EF  4Cψt;Wv>Օ-8'V^9u7 J!uO1Zu 8Om&!O(LO9tMrt(.{7 d +ћ3#[frRshI PKvmI `b-9!I"(! }垄fI͖> ׅiB\Ӝ#F ^@TK4ڨ@AQ[Zc P1I}t7I~1I%tgdߤ^v3r.6oͦpO<ԭ쫐= AԍuvvGA,[3mbndR6ܰAKvxN3k?f)At0-'s 5\ʎve hGCQTXr."* Ah\J߰\Dx륜- #㸼YK%dB$p"BOKrt'ďWقwI+CJe4v j22姸Rc\v( %sjK&.,q#q0YQOFws;6ɩ V $5|DO :@N$5p*r;i+s :/t/4w;fLs`9ׁMo9W^O-r+. FK=CLIVUY(NqVqv6noa2f ?Y lBVWݜx0$>Po5 *mE@ʴ2tM,ԁ6 }hEŕ`BSxF! Rpz w "8g#pmʊe)w&jɘ7!z˔4Np`V0U:AIjFj:VaRbΫx!7lXՍhR I.H%bi:EFiEm[|a{{cqw~v[mowoHgs'_P)U֟F k('R{06Z3Ff;V=O.H"g$gU+3dЂ9Q6]ԕ2TH+%Ըg.|l~vq]5~qpVq!\HE0Nh+G]W!Z씖\qTJhvOloߥL)ԺZL{ PMߕU_\apO_f@nUՑ0p*jq/8;S(pZ'E(*E׵*\otu-77\'%g\QRˎ] mzxKoLWw׷L5%֤39%k V४k< HE ̄(ЄG%P:#X+V#t({7_?L jױ4;S=dɽi"H ]ۦ>w~_$=bHÛsY0xM֡1io%oyyvWv5|SѪjj}$X7s~vWA']n__`'>QBM4Ǧda>ܻV[.)!muVߦnMn%,䅛xB? #FVBe= >f$t+C]&j`F 6[%@٪iGҦI_ˑd(]UQcV_{l(g}dcx.nWfPOE2\ŭEL _tjo/oBOou TB/D5+B@{_C6uJrP*ĝ2])E׮lNƒF'v{[s$z瑱}`ߊ{${jȵS\FڿFɐ$~5C+t~c؎! 
ճ*1-s8g(Wk|і0]'w7MQ[>jqDǺ'ij?}la ee$Y^m=>1?y4W[#V^E|v5-!#Sr#ӳ+[RphSتi4?Ξ0rVd7M~Q,^Kt`.މj֣ImL aCiG$N#JXӒ}~z٣/2̼j3"U㸫_]ϯ.)V$Jy[\|)mskh %'5juIqѴP砩3&C̐2.} 8.z{6 k]=s!>9O \0H&5CtJs҆/6}L)oK 8vi^._9+lnޅ~f{M?=)w1Lӕó.hһOMsljUИ_Q&Z[uEM"^RgIP8GuZXG)9mx(VCH 0,twS|G٠f@erDBQM h[EQrk)ؚF9Yrt\,+ӄ8*jbiI Oji@BM#:=d*/_w_g-ῴ*CCL|qɶ?rd>wjAX,@ek^ xjљ;ȃQsDJL9of^ -&AkvT4׆ ?4rj" 9k,:ǘh"30 MkruƋA XzZrcZf0 (N |ħFsa^!LIx_7O)0&8k$Khv?^j "bX&ec~8Qa`n>1`e:5ќˬig`V[-daydMJFˑo,0TZR 'D7T'^zzF@K1lvuiCTU;= 1xaT_ŀPRWjԔ}Ěw;d RmdTN6"S44o ].?i_rL%,䅛hM s[L z\ RL'CrSUhg4ջnmBs>I:*31H*Ơ L8Ud4Bpm۔~OHM5]MȥA U{ܟ>lv.Y676훯wwi`[wݻkwMYN÷s\ƒOk~wi)o(1x!j P L4r 8C<7^mmX%.Wp#FV !N9 +0$&b 8FȹUuӘ6|` P2X"&$\7*0} $"&Y~ۆ+ +qă̼_Ǐ rt>T9]:Ltf>f?9G<w0zrd!DždLDSP!ij 2QӚq,%6sKM$؎.mClˈ7Ksjn@YtO/օH?|^{yӸSvIt%cz"u'͏o^qEr0SH&ׯ?+4x5W? {6!@ Cgϯ΀l9Sc BgtòB[3K~C{OmO˅d]==Qa|z%/ 3ci@ Q'CBts,KxJP%VLX0 R֙)2dGBK1rRh`]pFIHQ@=e KXLP3 Z T+H8)bZ$-Ї*8N-rD(8e&ȄN4y'3T,'0iwТhs\- ,Jg߫rwt\d<{|; `G+~4BY ěs~RZ3ra^J@v:adƤ[@ ؠeir&;W --!lSa4隱H4JL`l2q!( AI)c!OѨ|TbK,%SZT>_;ly=H`1R$Z Qπdkx^` pUK.uKWg7OڭZW~:'0dA7ʁYM3ppEw˞xp'83=ჿ3=&RT\x]~A4C0W۹gr`?-C>qm(5 O\h|~1UQo.*5!Xrm~n)bq'Ubss]z9f4ݐ ѳaͿ`JѾL>n S@ϐVA`?pG)"-"]S %zEUym2y.CaњK@hhXoyx~i@6Xzo{8Lg `҅* {3G=g:zG$Tr_E,y}Es=ǚBN-6ECM*;H3;VpFhb 1JQc@D g0 1!c@t"?I4 rO <ؔ2DEZ B+A>.H&Qhq"iXNhz *;`yߍ w U10abYĂrV|#1OXNC<3%rR^"SQnT0u?P3(%X (0JPB1.2\$v4J'IqVR(D*J3fIQ6 A8 E͈L5?<5$2\.WkD%miс@sYv5%"['% IZ t: q O-–=ɸvV g!} , dVyac!g5ִ#ڻ`s^lI߃x>-G]Y{l5І@IGywλ/ 5VSxܒv+Y{"j8 Ǹup(N[GƋHgSm]t7;9Ej7#{Zń6ӓV(Ӹƨu1SOnOaXҴskDEc}bvK%wvξ :gf\iGVqUêԪ阣3 1y?2we=n8|nFyḶ+.YT=➨-R) C##'Έh+DF'w!E w;K!bmfs:I1uM}&we Łzkdru6y4x_˙<"-|>z=zӐoeo^@bX'/4~j?Av_,4;OAsFn'|o˓#؅,D ;X׬\ hHBp)Zs|*PSۍYJ?߃Wޅ=)85orB=>pvּgg8XzLfvL-ɫ 죖>IԡN/\DdvܱF+֋x weo2NTWrO#_޾c>EM4shX[PPˊΆޤYQUX39H;ƌ}8TS{w.9A1G{xk7T#ޙho7!!_Fڍ>-щv;: -enʴ@5SRxhkvRvUޛխAc$U( ӸAI0~`tߑ/":hޝVBf9wQO7xDM`FpOaPefr uWܚxq2hOhPd֢z cB[m|ڒ0Xh;2>SzSDw9#:kBc"T~y,Ϻen5T Aӈ~G ;TU8 j௾RY@Hj[bf0֒5 ׭_` ^d'C2ajRWF܋&cEI!jgs]بXzB!}<KšKb gW# ?X΃~N{-"]W]B\&E/Fljj K5n_:_ye*r |A*\*K(9QoԡPG% ᕒWҴs+uXCdXNYRͻu#0myRhx-NMzgCY<=l s58(L`4Z 
ocjl8x<epN_]|H&bωTbw1#~n!o{J.!BvO9~ۦBfB.ql&Xdf/ y1. 9=1LկNgցU3ߟH#Ey/{FE/ fr0LY|_,%4Ee@% EH:#qmM"qaMs}\y؎.l2 \#ުfucngʸ͢;. 9/Ow|壢(+, Q,Kf'$V,V I$יO_j1Uj#(pͦyUysI}ԛk]"5v:mH+hlȉRVP6U>&bA{}wYfQ樔l;+Ba%M03(K2#˔ J84>IO}Lf&bxu&:`;D*8婀1EQԉ݇\b,Rv pIspăHøuC`a:~Sq؟^GF>(>%6/^4zMcuZJ,ht$ c5A}NCBKlh,`@f`gZFcN0)A38 g!/yF /yOc= }C=lk3bOTo@"?_"]lvD^|SJX_b`_>ؚGsXM L_䑷8 +XfO,y< G" f cbG R T#4gs86$9BSa2vul⬰`1G'w'":`%M,IJ B9%eIt e&9vdY59id$ea0ZB9OD< +T s::آkPH$tRkgt}err9ئS,P*aEE/2ͤHR-'U*Mq(6Qf4[Yjt V\¢+q]h6 1oR_il= nԄsn~| HCVͮfgvE&6;XGh"<,ٻm,W d7rrxk` '󴋆DQqUA]m٦.唁FwK<<<<w{?+|\3dbW-?ߙċg <띀%9ۃC3ɇ+['Q*' J4)wdX]с3R$k0礌UI%X *WKn+?xϡBF aZcSq>w8W`X-rd$˨&Fvm" FO]PAb-y"uR;maR%WX=Q;VWpT#o.X#;cDp$ʓu F.XrSa"L+%9`8*?g@v|o +a[uYV͚$fQbrO1" KCyܹF<:HH-8a+'_+ ^LPt!qZIZSe`<"Z-]Ԝ$m̈́ߺ%~q#DnDzmKc܍&1rՐyn|z|o~^.u1MpNAp=e_q~2r >پ >4B[agW? ڔǥ+9^^R_`Spgw. YlUJ 7?}/Ƽ0dQ?a,Ub|q!/F% 4*,4C0"VdfgP3A_ ~ĤTkSHtTB"@+*Bg|SH)C@R@vi*wFF<H+AS1BIx)FU7(~.%/Il ew*o67ab+`6`c٪w+Ι8 i{E|alyBY%}Ud=" ;Ev#AHX d7:Qbe:V4N$60ٮ֩iCFqrP&$Ryӥ}S27'[OY:c],vPksh6vu c:TƸz J$2RjPd<)`q5cAbghG+UoOjj5x2qN F1V٫.~djLڝ sDc`T7Ÿ-5@  \>f:2S Λ,L23.?hQ[a*K3^ݿyAd3 c߳{3茄k]d/\Y'3O7Yا(kizmpmc7\wkx P*}pxG'F u4.і^jc}sfq--Gx re5JѵZF3z 9g[+]Ž\nZ7 Xð#r aH󾒺^LӹrDn$.Lt:nh{}iġN$N՝Y.FKZ+1$@w" 'b& *dtZ{~ҋ;T6V"J=ӥŝy ]t9= S ‚Ͽsjq2AO1T|VIHr!zf^}ؖ gTrVsΊg hܗC`]:#MJPZh{|NH~==^hiiP*Q,R@*bDabffhiLRdBae8MQfSܬ\ΖDxo,9,-,U6b)hj$H8![)ñF 1$gLc . I@g5Za֏d6 )ԗD=MV_v.+fw}\&j fb2aVnI,\,ڕYΙ"`ՠ:1 *\!#"|OQȎl!` FPaàEW?.ki\MYNy[ &7Հ2㶎s}?Lӽ%$JBY>?Eۻ5Pn_ ԼhBG*_j 'vNVd/6R_Z|KAIaM]qrkw޷ٵZnA6=f8F[)wNMQQ1WuDPyȹDR@*4Ő&2CDVH,~,E]ʝ &?r:{LX@ɥ|=aXBRzvm'cyx,章xܤuag%)=A𖔦#Jgz/ 欒">=C⽬=3}D+C9W \!bgyF!V3˔(G eneAo*A[9+u.X)7qg.W@Tjb(:&H%)2X2)4+ف8 2G6'0A*}΋Ӗ5;6(9_RT?y1Q1odC2y_8kD J&R5c"]v@v&R-'`/u^g3C(jfo{$P:ܢrJ];@,|Qmuh%~T d\eI*)Y\-DW#Ѝg.=m@!! 
YWN +uJV!bF;,WZվ+F"UOU^uYm q.XTP5eXIBi?Viw%՝SZ.T-rukw^1,yRɏbcnB:[.7` ڌ7!Фa[95lYnAG(}1‰W[+$خM 9dY(} ǚ2Q; pv_tGK _6҄OR  y=jr\DABֹB =i.P2!($^MOp!;r6j;w,}*?y<'| \_+ɥ+sS~wJT7ƀi嶝XQhہ>ʔ-n'Y!픶0`$C=c~Hu?$+Y]~8O nqv;H3l};)ݠ, -OHĴmkaqwZ%}O+G0[*צT\rA`c:NQa\\׎W0S9fٕ`Hh'֖uSd{R vi/ GҶa'px5i pZZ,U1 f@D;q=j"p'UW2܁nUWp;<|6nF|.\#ֱ֖ Jf\bs)Sp,`HXfmqJFItڱ֊g/^0z d s0(fAnXF9VpbOL<(>? fm\?h}F2Y TFb%fњ:ABe|oLw/n2N֏sAw>w[?K^{բ8whĨSj5c; `r -Jo ̋5V_> yU~;'cvr*Ώ+nLg<ӘTfvq)"QXp &v(P8SS&BƐd ^!d* (€0V,v='i޺+ӈBXVM S"y ϺD;ak%XL:)T q%I! q p8<"8Dьrk`; 7 2Τ1Ng$NqB$ҙDQbT".o7i&*2Yi'eo `;mf1?ŏ%e-V3f/4YOiRµT®<8$"2킡@aD7l5NaW:Fna_EHjjRyPч#ϸzTNŗqx=&R_NEKsj{){Փdˎڋ^14Aᦩqg'A]oj&#"TMF^`scCRgo{cwUF8jY4r1e!T747[7>O$0U!5mwpI n?6"(yFB!SL>" EaM>:\zW5Ys]3}%!>RL LX$ɀ#u7 )0MdB6ܣ&BWl'ڋ0ø?@A8 7qRr&2%Zǐi„QƘ( B"Q<11")Ů= 8GUE ɵï^߾`ɌЬF?sb~XOdvfFZSi~jcbfw>&2UEQ՘hC-u1-_OF-/ KdW$ ϡ14]GwgM@VQgsC ˵a[j 8eE,r T*!tQNbָ㤶JK `s]H&׶>f!ȜyKmК-6|2#p>hUᳫUl(!9Gg  WrW&Oh_fF!uX! qoQ&6Ef3򚺰+_u՛3?d7./ +8%h)5BZ BlPA-JpЂ.Vc{kJjHHŘ+y9Zz(N&3̮lh"b˟ݼ ُ]S7E9bÍZ6SdPr2.Shׄ{ZǪrgBq&QQDÞI_lOhs' {.1Q4E_`I i=/j5@ |3"C(S9N'iq.I$!0]ݓlO ;xO GCDhM|- Z7x@QձBF8S=E_xQC̗z.2"X3iv3uvX{wk?y@0y8f'&ov0~wNyR4;6`B@/*Pl4)e3.?%!q2K`ġX`t;d:ve;>NMcp)z*rLJu>iUJY^ Uu6ޅ h9eڐ> wp1\˺c.>V/oMEa]P ґ3rc+ B˒ZR Rڈ+2|~Lt)ViHЫчein5khKCLϏk?%+Vy6|soWȶ*uRD(ACBV qu D!xZ*72z=kRHz:T2*9eK:r(+}QA)4,ȫ04J`VDp)R ?Pc}FiOp'K ࠈ6HNSre!ufρ8Q5x1(#ɟ(7x21΂؏+n9dؼL] |^?8*'nl}}q9_N'ӗaQ"\V].dM;Z7t$tz )\bLR^Zir:WL9,%K A+@pL[uKCq d^4H4tWӇbK.ORZtEU*)ekN]奒;4ĉ1H Rf}q`\{7Scm8h|l{oH1A-I @AcQMyZ4o:X8z{ VJ0 np^[%,$"ȡ$u$Rvv[Ã@53hA_=f#lQFۯh)ھ>1}ahv?ڄ<.q#qiȻ; yJ060GWx2FHn>RN},!,H?S8 ۫G%8cv:|FozX!-gj--oNnjP2)mܞmnrp Oerن1$ݸ9@gN(_ k/Pϸ|Br/to ]}nR+-+A%͒~m[!9EsMƒ$I(uD$H' j hĔE?smH-.?ZjX I~~s=ZՍq #LI/v4:GY|Z>wlo%LGS2?]G}v;CXI*qwpnJBoU~cZϏtLSo,Q_,UN]g|*Wq-͐g>E攔pݍb3uRdw;8TozwKh yS6"{R3!N.eBƼaG&$cqgls*I5%fb=YQGЪ,}=P2Zi(JLQ (&xKq;K5+(c݇5 %;(&g[ERcp:'#W_cbxm<`ԌyF)sptA) -9'%ZDKZ$ȸ Mp7.5QYw7Voԯ@ 71yJL=Q1NS֬Q~5[^ &>g r0^=IQ!TyPs:5:CfZZ܉zҕ &!Ƣ%vy( K4sFz܈= d[u)ݵ1FuK*~]12q Mfبm\z8A`%fz 7|'^9Tpk "IjHWJ+[T*u淪 RyjV-p`ePQrKFϜH׌k=9۷NExn7_qkvEr[L_ 
]]v0Ejo?V+%Gg8K'Mq:-RX pxW U=7nr5 +%jn NqoP!i2 TQ-TTd-IT{ Yn^Y(K.` /G: Ipn)׳Zt܊(#aHj:RcALd2Kl7p •J1"lmƖۈX=[~@Xf6 JSw^=?5堤r:"A {?\dΥvnK{8.QoVyѥݱ(y"JCn(}ђ'S .UJ=2ݧB zn`P6EZ2E/z$T2A*'PzSm'8,nt⇤A{s'>'ct@-(@X!6>) \y"14WEB)ԥu#7q5Rka:m*r(G^8q=F;T))w翺4'C)LOr1_Ȫ)xď!š} Ό0Ǔ@d1qw]| [̈́*HFй †n>QnGXc^]W=!f$>mt;Yդɲ7W0jhO7ZݰC,j]C/WCf9}CE"uȿ,莏^CF%r~;iެM<2f=Dh$a; >cܔ $ZMh GN:%b Y\C Fm=pޖdH^ʖʄnkeBs`f,5,#w}Ѕ$D} ǹTMjJ]M!d*M-/h e6R0 DW=3ܪҖq׎ۼ( ¸ϙUU%'rW(eaD3h0st[Pىd0HPRun4Rm9SBx)u,0(Ycr,A\ ˆ RkZY5 p%*=Eժpm|!P渕![@dxqPLvu6BJ'JD+S|XJy՞+c}*7LVhh?n1>W U_]^&+8/ ::ԬMQCc'yqS LtywsCVIDo0Asg'ֽպcûz?>~z_LJ؄-]ƦOv11JPlնMkb]W$T>꽫f-F"i{6;Gw. MV+QO OBD늶!@ҵթ.3ژUUm=oھwj! жћAؑ?^jժNm>^jd7(J>ulhFj_n/ ;5,eUE>9I䇟!Z,foIRGbaPktDr~Hڳ•lcMkPz_'m؅UKtid mىso`W1ti)¬°~cZ^Q\\;ToQv&p nmy)t[ x &y}_t'Rl|]hnۯ]6~(p"1! aYWtP sٛP糯򰼾~kO!h7]1bv!JZwZ.)ogנxo0'4Ҷ0 ĂN.Qk--݊6g]ER(x<@^Hf![GM%j}2p(hF^gm?zR;/aT-DYWh1B;[}ox&;|ʸdRI6H$:)/Cŧ^+LO]Mm3s  OjdL2%zi(%ӧDHd"GɠbpT")IkTbt@Ѳt^čL)SD(%j&Ay)Dw-bz爗"- ;Hc*?_S͇xВswyqӥN)H# " ( 5`E`+V h䔢>+jf=SJ{) f;W')HZMJU+n^@C}s7X/]s4 )*qQٕsY{`R_#~u*V*-c1 T,+(GU2Fۥ(s/)Sѷv..wubl5HmȖةZb/3/>oEWX/RA,Ƒ$Pw#L`o<aSBh<[`D"v(ѷt">n\<]7xĉ.Qk}v޼X,pvFN.Jq_ujLYV2f0ɟOIn3_BWhRjݲ;@bmma0)M8A>+N+/p]8lċȹR2PA&07 ;]+9{;w];%%SD*m@LNO6AD2[I *[oۺw198ƒ]khڏnXxYs믿㷝[UHW*5t4=V]PRuv \u.&vŒJJma{4J`nLZ z%/ֆn}RcɆ0zMHP(uB"#wJ]"!;:|t=^~!o?hg{` ֪U88ZȂq3)˂sNJ=)>q֨SW7S`6Ry CNBȁ3 Ȇ9I{RSO!E39S$FdHߑrYv&/1xƧ(r*G(pz"N&1]w0ufYs>=s!OgD6AU[]\>zjtض%r fZPsBvFaOF_!be&ͺjPH66ͣG잲020Jr:z891X2DyT>:'\tp]OVNF0)5D,Iy ^,$B(I>C2Eׁ:U:xӵdo4 2]ED[{fbu`.~ew,ԅYIߢmAE n=Y,Ҩtd126,$a2"A< U`}H+1 3{;JYD/ћtyZk %C|B9@JyTR&I!O"(ؔa*}yCSޗ*5fyL'cefʧ6\[:0(%Xd#A͢EgHY;PWoDӖ=v`#j6,}e"&▲LG,DL"­ XxX9* E c|*DHhO5Muy;1L^o,pPAV)WD 0]X@DJ=I 0)X % 'ryaSn`(BLX~H& 1>Z 5 ;U9Xk 6iAkdC$`.S,96TPPu-qfH#%=%~K0~ 7}bp+nNblֆWiu3cyPԣE%xw=JW:ߖkdE@cidG>X](34z {Nm8{|FWMrҁ1^=PPL~WP嵯xR)T{'"[}݈0G6X yF[OH EcQ ju ;hrgXDn?v62PR*/a^L2Dϒf3r LD> ?"8 3H[fQ\xC-r\Ga _0uK/e0`]P3gQN*7!Fi'e4Lo 8!As_4Ɗ*oEn4FhRo;);~go W%o?#!4@FJ5xBw%}/0u2' Q*j,k)bƚ@ROcߊgI^Im`Nν)K PRQd1XD9Fp\(EkQ@mh#@$FQp@C 
,De=(bw:qj71Pp{CQey}@'J1lǼg8EQc\O(r0{TǺ<=IъFLm?]p˧IyF: 7$ܒ^p"qæ7UA>3:t'7C))):aldHO6KCgf!||Izc(WB'A%48WTP8"S8bJ$gBޫ޽D:s׉9NnNf2 2-*Eq ѮF:3",)h~xTCa?|@-,Žqr4^{1jT:̥yXfv0%,O%&r?|VY~d'O~[P=O_ZT=,Ĥu֍xWyP?ŲbӴ]< `'ȞfDW@E^Dl\MS.dpIH{߼&x$\6$s#|JV]\5ay&t%ngoOp, Ys^.xubuAI }u+TOkYɦkxL(JHT25XyHY$k20I>^-w/ kL%x>8z{bw470l9 /×{yUV`#p:8B}Ly[u^&k;}Slt%ʔ&sOqLvK0 OyEND*?!=^&("yfǜ䔠1byo\ۇdy9eR`N{/ȒƶGHFl5A#Dr.U5" ,Zn5D56DPmfҶ4HgPO9跱) 0OH굷 yΣgC$1%*UAt$D4]#̦,"9jL7{FMi?r6VO JE).z+"ňDyJ`? aibM%&_O& R1* aYM+CKUςTrU#.5b+]H$O)RDQMM,$̓f{0٩lxT" QO2} 3K~w)eHXGc\rhD< E!v2TLe 9hb_l+9<!y}f<\q4̅GrOmfh0Ԟʹi ߡ*FBfz=cXe|*cފ^4lłŕQpG01 f?Xe~hmRpUoYi;G{?.mw^ϊ;X+F*|GjRWRWS lN{>'pިDg!Xh$f5^N=(r`"|U30 ` ǎY;c蜇%`n~.30%#nMKdTr!A䉭f|8( E2!wlYg3hzO T8w(`B-yȓLLг 5o@}5jYα3VoF*9>ezv hMBHږn%,/G"W6fkY{%^Ҝ.y_Jw1wUttz7>,|n2 ~1mYUs׏+)r^¢D)Gz)JTM~6OzK'Z͞cXV\9z!4*|; [WLc4nGf$`j4=V6S1C)iN uv(cne1vvS[缜\ۭO93V8rW:}Q97p?*NWG'w04,?j73{2])d]_bd%MM%weqH0;SD"q9^vN/'M&UE6=-EPY@"Hds\%:2&l Ǯ)FC&!E8 F“WXP @Eز;IN†dE$z$IrQ(7d#bq6!$46m})i}lا9OWNmOnrGp Y4;dsCM񾫺U2[jo/!C/I2fE@/S*Q` :!<*c(-ڿ_8NprROd5gvfSV{]GeaJ:|/玖zrT؃+67_\^-;Կ/mjofjKG<㻟JSwn9\5bߨ,iу3(sϑr %ds%#l,fsl!{y Mܔ0|Q j^@D|,H-Ahk#9EW`EaT0(<# # zo_MK=Thskd-MKg#kҶE8Zzssj:G˨½-}ZJMKIV-5wBKס>ZFm^K_tU֫;Jis{-}Z*>w4BBky}~bn?xg)ĵ97Q4U`e^p?t޻!\2HXlZ9|M8a U> =ItA ǛdU~B}Zv!uD/@@ 8T.ސ'WV<QӱpQi\~ēkn $g٢I]kÓ3`[Fx$>(yez'[C\G|c/aR ;c6%݇2t`JUlpc60 @#h;=C}h#eX%Tj'teNkmR{Xqf 8?Wv9]v)_>3?>~R6+x] 5@8,0I1(W"uJ f& ))pu?_?G(IxMo9" O~G 4Ԩ(5]M]nʜ^gE[Vem/Dr)WWtǔe23m\_]JU~eLN/ |&I/ lVO^jLkۛ溽o*9 7_n;1JUSKtRk?]f|WpfA_)uyr5fH։X)˻'\rw~b9)%w=SS/|zhpJ,ٖ֔9M%kOaM'KhvuаSі\,լc3%r>A J1kErBd?*hdaf],BZ|Lq;x.ODZ1 ޹?P>tr:ȶ 'o.үŇ$(kR\y?Wb6xAX 4ռAKztuutֹHx(\?HzS5 la~;)> ˟kvaCD&J=BC–s[S!B!B+Dى]TT2#Ř=@ W]v"60' A(p>¥fuEr@/>o*ͽ[i&N%2 loWM7_1rq{q>8wޠN"*g漶{2O,EIr1ϲ;i :P/d !XK"ٚ$-H&/7nIjU>},x+#:Vfg ZׂτPdI/:^cԟYf1Tk)g1SMBNzEQJ0D!5iy/Mgf>\^wyػoO1Oav2A0_LF54c/ӷѿCR nG?~9I[k?O ~Pw ԟ9_˫!a1T%w*,CmE˖kC$xK ~up~pvcUϱJW?5@-pyFٍp֡>ZB-AHﱫ[jVIߠ -UMKg}E- biK[ >Eimr KBE)@ EIC!uyj+%nDSsNdP Sy)!x:TdSʒtR[Gq. 
&AS2ʑUfiPd71w]Ki,Tk{ń[ZP0T $ktHpX̶ld v`CNɆWZI [ֺ 鬽3FG!EpiEmqJY0-D(M#u* {a `sQ.6M>:^1rb{$i1.ڇ QBjo0Q[·C*gV-x:TViYyz:˞>䷆&@GSlp ,X:f֢ >ZFx ҍ5YwZJȰz0nJYVnqyķ5;*J>YzG˨5}}V$QKtbѽyYV7G v4t]k"4M Gaz3" *gP ss'Rt46-#Z$OF˱CYv;”;W^:K1BVgLgk;BC}ڀ WZ,EKF~:G˨9wXyZMKUm'qsj6-f^K_@hi}BQ[ZµT7j>B ^Z*! #+ ȹ$IQʔGrQ;aMvgR<5[8ۡwAmJA*(EⲢ$e7( PST:H&CPS%CEí` ]Rf Hn6 I3 @>zdpg. Z4l Kʙ(|0F ˢhbj lƶJ ۘQKC-x%f SѦ^hga"lIm')LW3F!@[9g>6ZFhd1agV TɧL5AI" -:ȔwSIf{Fm>"^ RzW>`$͞(39nlTJ2R`V:#ۻxLDYe!lHx%bqQu2 F׎{耖Mu9Qi1m ->Bٹ*F6sňwAM`Lz] D-#0sѠfn덉:G˨Q7s/FOD"s-ҚVТ*+Zd)\sԸUk]RbC"(z4;܃|RmeWi{ ߶.t=w\kOx|{&67?}%ipk9QCcu.gz;?\SNH&tx/jHɛw4cF8qb:N5rA_z\'3}&4m}3 C$z@W_WEWuy/RSwOOv/Y˓п29VH]ߎܖ)3(4HsY$Ɉ( iA@T$(.q dJƒՠ҄ 2B FW V̉w:Jܛ OP'8 T֞{U+byAeaW?o.$ yx#r>) txD+pgb$yhR' teƦogmۊyv4@-1LBVNY{p( &TSo09 RͦpKhmގep'gYӱdh4'faK*6Eb''br-<52<yy ,0q ~{CM8 $e/>5hšj%d)=zzb鵼a< Q*:o:gS-2RTx7la^72q1[ZGhk)TscO3 )4zxԳ.'\FQ8<ȣ,@9Ё#u3$:Np1<4ϝ wKbZ˛y]󜾭C)ijR)؝eLSh_YRUyy:'x{Nj9I+|PgH^)(AFv~&M˻W/hbFqsjrk>&m Q=E/ՠ6_NT$G{*:-Qj.ݜ.@9>N@Ѧ)9do -\\vګDнg[䡓tj$gvS;PՋiD&S_lMbu.GEݩv$w1F,JzIiAF L]@e+tpTuJJ(,ކTATWbL HQȣB6LN5VBAT+;dfaV* J9"E" K1dy_:2(z$Ԯ9h L [}cBX"O5(ʢFXH-8@\W+"13!IhV8x֡&4%z&=5g4o I=KlLe u^d0ڡ6ZXe؋(\({;;FgîpZÅ yp8dhr%0I22@bag( Y;%wvt^ |ңI+L)"x'yPfwuQq߮Qũ_F+g(^Iz5yϱFY FuF(~RY xk;C*=9 } af$/8rMXx/VR d!$jk9Q$y ?*53cg+*NThU0 \O!J$b2ņ5p֟MQ6R 즖)>He5-SzA?Xdkozh]Y2t2Z[**sg+3;Mj:r}$ΠV2<R#Ӛbɉ+ dr4tU阵 PT-oR7yo+Ɩ;r.Dt3VOAs- rXn"yb'Ac$c'Zcw-ɩ09^#!CC F g{Q8Z { ggp`umkD`~ IgK0 g É $V>Z#6Q?F5u^T XVJnV:yw h#}?Zi'.@Fi6vu~o{\^Nd!߹)GSy7Ӈ8w+!p%ͻoknSXwnlJ8׽>2v)q7n"Sb'CƆ0 a # .H+XnyÖeȼ dbm}A5Jy:}۬u=/CZcT]'+}V;_4 , U{ӇS!IJb(l+Wl) /zt(.(LROl7a݆U}l1eM+B9o-&hq:?.%ثk_)T3) >ff?puSa79r\K\|ruS}&Rҏ~e[S ]^n/mo?!WOUwM/ɘFNn ػ?N'c#V=3{c[Nߨ06(>攒~*ߡgl©UrczKƒ "gΆVDHA %MP%R0"; }qqw:Պ=ܫQ?ܟݬB9>i8B=_b*|r1 g{k8??_wq$(ŷˏoE-/&L>9r9wa { TtSd쒟UH;JF7ouQ_oXw4*B1J)=(e&To0)54n̽bl?;7zӈF|@Ɂ*M#sM/Y,9EpSM oԏe/짅=\V^u~š˗_Fk4BϋԽ{ᦫ +˻OȦHd˛pϿTKwGQA{orjtTKa&f:igBr, +_k*z}u?/s!Zk?1JuxPys'~$-z{\-"O)Y=*yhnKmi{n7]|$v&28jvZiL=&Г5 fQ3SR7JO+p6uh@edc?lgj[XgJSF02ƛ0dO KI{auCqF%DMceipnv^h("JSSllR5ь 
hu)(3IPtBDkBO:(äH)18%4PGK{42ZL 6ZZus"V rE:3cYճt3E[vjHI0J,}g2 n|)*2ŃοTEΜ$M6z)nU*_{X; &Xk#Nb)gr㡶sx>:Ft4]»\:Upr|"ͧީ;UOߵɵwmrMv.ɘSuFN: :1S 1iw>in>az>5ta/}SaCZJMЧ94SulqVJe^jn<]`s4wc,73AH7Qy⢻i#byx8oWiBN/7o(L 5y ۝n`5:%-8O9*-2emu4`2a~;ըIHuNFRR"h21l"s;MI(fR% #&f^F5' 2(ka#h6:$K ( 4)2ƅݩ&iTc-k2Bz(9'T&5? eYʪkqD`i;&ds$(3N9/V=FOFI- &/+#GE`jCXKX,vј`^fa[ԥRdVʺ$\j ̸ @s!:uE ,4t0Z&tbѮX͵@C(0(xU .T6:j=gLr? qp$(C#t-ㅌ2G2n8ZVG&at;ITn&589C4L 9lK\IX.l m(dry Gqlk֦ 7Kf}j#!V .'?:i m_ラ BDǸMM46*|PH⟎o,q{bĹ%cW届q!n{V a# y0w(y1DE:cP9ΙJ0aG!7H?ϟaNcPe}IBA ^LjЫr 3gLD+4{ނ}ykNwLeu<-a{/hXTjXzճY3g+?pY\|4cL248  3+4ȹj] (77αA v4oSf8AlnTb$d cK 8˝;^@'혢9Lg*Wӓ+F=x_^HL 1L# iDHT|ΉIʈ\@b|]\>kOuL@bYŕTpT" f(Eĵ4R) =֙ܜs8Fw~nz2:Uܧ0>48Y)V9[.uS0s B=;S Ķ@gRdZ"kki(z/Lzy^Sr,<ƌO #ejy[WHdDdS3y.}-@?/2MP#l Q퟽Rע;̳7*Ѐ:{Lmh:tL3xx34"}ؿlkD6/Xmidd/]4&$ Hog"*bd?%)YO"j[vq}7 *8&+/HJP>9cꊴ3UKL6‰Zs͏]]|\,?n tCW p c|\oMϾ߯/]m2nMזyMYkM7eQ6"Sb.%C{>%ͼHU"=S0TL1DV䕉4b炥N1s&s(DVZ$2JS{ޗ E\(U"O&7魃+2L@U;ew7M"i@~J|ѭ),e;2{$2vB̵7]<&~^uO SE1RK}#pwuV\0o3Es!0Н4|60.Sx딗D/Ϲw|0ўjirb{s~V'?ҮMu ;>?[Mf=|o_RjK܀rW^hPR2w>~gw^==)'.E,#D}UGZފ)s6lP ٸG3xÖ֨ЭQH[5ƣPRu]BQ*@s GPlGSmL"\M}l{q9kgj;Jx%p^CG1zXNǟ*['Ih"U?i =z׻Qk [WNox]IFyޭ MMG̀w*vSDiT9w8n}X7n[6%3mJI(SVOo82!+Kz=<D ;5:jznN1n!e^_nXRF;:A/V块;Ɍ?K :FӓbeӃ& VO6uFùI+2z*L*ѨA|e> ]mϟ>POFI4ҿ]/3vD{yaXY5Up$>`TнztM2qe$L0(* zRv9uq}(۽߻rׅ9b Y' `нGzC|oc1|ܥꠗC,P{؞ގLoznqztv>x/U]|ßn?~iO®J$?JR[e /E)UEiR؁o?U= CE(,gxB #9;zBjctt(0@ؗPԼ~uފ!ӌ <(LIf2!6sFHB:[V@V+Uȣw2!sJ[^ %sp5CҁUIXjiBj/ '9hsNXוJ S^ RŊq-mpg"F';\D+8ל90f7lp(,D7YpZ_W肏F'bUNs֕"j-ߣP+5J% S!(`ȲҠO>'=`"U`FgeYh g }tÇUG{ JTq#мP@p!i"clٙPnH ɜ>FŦBMcX|2YS1lpev XU;PqkC{ԴQ7/ H}WՄ)uJ?$RY88|i)q.x= GkxZDǀT- N?0 G'F0)&ִx:ќ7gn2)v{YJ:edWJ BRX;r]Hq1)S(Awb C/ըxnF9VKU`v(JW쨲~+=)Lymj׫ZO]X廥~UΓ+v3;qgYCefWaqpra]ow()ݷ\-E13WbeCmZ+ p:.䶚do:̨uj.ek >\>P ;Fs~XU'$لN 7tRF-sWX['uc]2s'>\,s\`-u4x?o 2'5a4 ?.MHl>Ngͭ!Bgv/UCsS9`Һq;z*sp´?=9$iQR_h RD_pӊN}n9N2%E<娒$=:ГN,]ȣ;90\M"í# d9%,xQAIvUg%V vSqJ3tp[XׇE9_]٥Yus+PӋ/W_g7_iVlcXTz;F?bsJߌ*ӛ8}t}${vT-4# {hnlԎq|YVrRAEN%ArN!%FI7zGѹ=*Mt%PÏwNx9n~KuxCHi k4㿊Nqqb~`Jo}|~":wϟ͓I]{ndosyLJ!> %o 
rՆE|‰jb|*gy,?{75#O#J4wXTgR>u !;f,sP &uHL!>8 Fl@u%LacnaZІc 8 MI-QR()%Z&TĢ:wQzۦ~/C"RXհݸGN uQNה{:uC*h-2av%?~tp|dwUПww\uUQgd?׷OMO\1Y=6jYgUۨ5Y]hTV/s1O7kdҚE)jEz֝3`%,/I_ \ ݘ<_LҚQ]YO{5飰3Gj[\ΕG:i4\Tg/]:/ܔss?ˮʥp7=,]UE99\V55)ny"/pL7"Ws\m"*! k^}̯L)Ja`|Vqc\v5'2ksRL 0Yve)`8gT^ N&MyJ:ln-˽GZ ,4m3;.s[C;5p3gadj_پe+5e_ 0`k [*.6l-foA%b˙[(QJ`5?Zjc=kT֮5pKV0Y2kgR dNx)J\`;%U ]/݋o]yxZŋ gWWIwʍ™]Srw(-WZ37/j~f^5Kl/,-c犞;5TS;x-Qs?CSKv(z7- S@'.֓esrr9c$fRU$CC1 \B&tE=L*LLIДiszRp`oU*ˈQf 3[D3fJIEA0xFH)*c LE Uۦ@#=gAP/^mcm_Bk_ﱁň{bɧ9,JNj4ҊHr{n'Nrw̥hA?cnuRt:=νNW:͞hDg$+8hϬKz="wK:%+r?{7;[=bRI2%<SB*4HLJ8FIf@%q$s[viZQ=s}FoǓBh8hS۫-J> w ދGeů~?.q T.MKN ]tr1uحB^\75ŜP.u*HēNLZyESU ޹4`Z"KI0;(&Pc2.+aZlTIk-&cl+o=G7iV^2k 4vJKxN"=cqGvРyĂNzc&q#'yf +'bE@ɩ/=]-{GyC,vǁRF伷^J~Rzs|QeX zl "[<֋omIȭ$ HܳooB5Oz*$VqǕՂ)lUq{ՉzmǽTFDꢏ'ڔsaENrI8컈'A,СK֎So .zA~WqST낏;T/&POz*&ZնbNgŦ?.~,yymn:V*z{;WVteiضoxxGJq@J0BHs% |O)SHAZ4"T)ʊDU%P2̤@TY l/@ybe>lBfe@+e)8Bg4+ Jc@)e2Ƌ\y[ ?W-`juoZtj  N)˷rhYi 8ѻڡ9/ߧp*$.%@_-]j 2*DZO&BKIj5tM @u+Wٝ?>m*O_ϡ؞ DꀾC %NߵL.o̩r_xvҺꎖ세8厗wVb6g*Ms>1ne`RѼ@&v]4L7BW}g0)KzUET7^]:.7MT{k4`]lG|9` ƕ>p+.di-/#qq~j]?vpƑMXO\hN=.^>K)ë!A!hUferIu% PɤV2.ۑ@ e'^؝*/wP)&@AaJ+ed 2 r %ˠ$ɩa> #j%PB\@+#Fp& k@gHr-f*dƠ<2[@Qx3f<1dSz"FR΍) 8X*1WJ/B4!NkJW]` kHwnm=tS%9pM/u/ R!e{2[* :B'D:D29~lK(\ée](wͶ&8')R-6VKqvJt(9%紐tef=${[ /9¯_ˠ:01 fBb5 s[>s]0UtCn;\ߛ_gfyCsKVsMAubf%ʀo0% @~N폩LYvBStl!:]Jk%Xw[֝%n,c%6j.ђ--t{wbZ=b҉!:7r\tw?"x2}KS^ mayu(>b0eB+;+A 9y}PӤV'>o^nutJڢ_s~6m Vw*<_jn]}B^Ajmq+?UjFgK)b'?}ޕ,<#tD-ƪDm[w2}s|r~;źN.rgj{ȑ_eor /;ܧ~a;ȶF=-KT["rPUE^NX^qZ &Ӎk1X5/~^=AA;6هzQ}d\r0+99{ۖUThiӼlƁ.j y畤0+H@Ԫd[t/"iwA:圓u~i8ECn2Qwn,f#<[ U4Dƒ8_X7`Be:hcN [xu!߹6t3$1#EeUSZyp5pdZFWOz:ɮWG ^G׫w7]3Lbp1ݫ"R#+v"0vJLm_ƭ/e<Ԩ/ЎTb{`݅# _%!C@~vڜC9y;^ C!bC !yw媔RQ0bdeotwW~;UĢ}>Dbzc('%zt&#RPqD#i !bCBwr3w6'8 q5_]U}qՇ޵3҆ pkc~aԘg?̳v)-d,^O:W*5Ne%Q7쓱N¥83.KӦ$$JՋոM 8{`$N~Omq3]sT5e(1,Mt=MK/%7N~ Q`H|.7Oj:e:,O _m}:x ?nZӝuX1hGx? /!ANsSC)} rcS(,-Cjna1-v9׬㌋a ƳuQ!@eJ6"IGI  _qcH'hwϞ)#Vb:Mg{F1/ع=1>+R(i. 
ZcJt`-i1Vi†f)I|rH%BoOEabBP`3.iEII-RJJ,<8D#;j?}wFc8ciA]BpƴnAmôٍC]/w1?J^]mIbNG|6QQ76Jqa *V,y(>@NJq?W=WךWуeA 1͘,YR 03/I%e*f\1ԋ.jm,'B8AkVVBfE;Sx`,3\ӭ/~ H>ʟYG >Rx\8`6nK?`"!)MWU,ri6?gYχ9ypn'uDJ/v8?oPHaOGwYfGsȐDtt{"~p;B2ڣz 껬ﭐa? 'yITd_,S[(ϋEjJCېbApϢ!}MCu%wzG׈`K2EZ0E DSL#)+$ tm_ O)0/MK5%R}Y4WYsYf)δ,sk^A (a|O7"`n'3R^lj!KIY(ʹBPd4媐HBjbmVPƙ-j%:[- fSX"@ Q2e3fe^RU9 ΋d2NEYHRiG-+F) &b$hE>PɰoDu2Hbe"{6@2 _D%xLcxGa._D $! v<ϸM`-$QwC1Ě>TҘKDKBfm){4zi w֬?|5]īoQ `+Z5vu!|Te o83+q))(X^/UFS]J!THhn|[M@y*YyG_SvZ5Sn~LZLA]uF&2lncG5@vC^!+GB[1fNy{`ܣyݙl 7hajhe2߮?^UZ>^Un0Ow,YfYr`w֒}[Fӫo>zW'P>[ N^,_ v"26r^|m/H)"72*6YT94W"BϑلdpԸT@0aQ, s4+Rm=%M+&CXo% y~P(?Uw( xx 4"ړXo7!ϣu I͍G\R.>{|r {##2\U#759ZQ,v[Ǩ!Ї!e;R(ru&Eb Q̔k2݌Xu-[V]KZyk_Q- !u6 p+ջ}=wUMFmNmc}-E ͊N a ɖ"c`-oaLj{`4 g]Ӛz_͚=S=;r#6T1 )=jc"p>ֻN8Ιֱ}ܝ@8V\ ca~uMVcutJT7Z칫:4r ̭ߏt=xu+ɽsfd*8 &Ë>2by:L_h"%ԣAx=7QWs|4󭨁iw(Y?޼Kv'|*SNqΡ/t-T>6턾$n|T-<ӺА\Et8 f`>-P^ c;ZnsrG-8w!߹SfL8Dɟ\H}nՎOSˈEzv~a>\l/WGeP,]~~umիGWnN=K6!!pN9O0!C at2`*3O[/,2)nҾ>#\胍XRgT'4P-?!a ;WW/=F9c˿JVǁi0AWw>N[De&Y0+ ؘH( Bɞw<'"L'm;kڞL$]g65wƳ>rzDnճUc/~Q \d<$8{y\b<jv],`,䱛N\P,x#k-m8 07Mc-=o-৥'RR ~ZZQ-S碥稥i)RiiEpK[K)R튏.aCT4&Hhk 㬖AiR-D"ק|ZVT .筥Xi)f^9q!IK gZH($M˂xIq3i B)8NsV@SӴ*eUqZ<,%)z_RKpqŽFE dLt1K =>|&.rވ6㫹Jc[I`L0ws"P ma>j|s2ϓddǯEIY23$OT YrNǾ,Pc"7ƻse_C*yhgRa-xW}hw*]%x ޗr$RTy?('6NqzWJJbwr:EbѹQTke85s)^R0lk Z6øRRtA.iZ6 );Il+1Oc0vFR<6MljnC}4U(Y3wғ?{ȍɼ_ӌXdgS6+m%{ܒblwL2RYbH`2FjmܟWpV~eJ|=>[SED`IʫfVl] Hxdbܲ9-[40CjMheKAqܯUAcܳ Hlf>X^؎}+,geVWLHV9xҰ࿶E j;G6G B|_ k5p0$yvN#Z;)_30U*1l_Vw†iA0)dau71ІɫA(i]4,E 橓=ꆍ#Fa790aIO[1 2Uxg|eV؏<80z&`LWR!)+[/& :ok>'|ލT~!_l1%\A Y_:?ͅ2o?ð ݊a!lJ1Z@j 13IDtIè'ᩒ&#3e+{M1ŖbmI6 $S8{@fWoW_uV~$&lnW=}U4@x+4 kt .xWN~31gnV75ng3g~LjR_W? FI`X\fjq\$':hMЩ_n .߱ @1 p17Ϫ{x^*to +B  g)<,/j,hg0-$+Cy/Rc^w DlqhXxp- ++B@ - + 1{Z 0&셖m欐.mҗ$PK `Na 'wt]͔kkn1t;ahP/:f#y)T:9Nԁf Z'dR Ƅ>1T;#({\x!X9Vîv5=盷GV[5[~ikۚiyNu5fR6?J@/(kB!S xKWЇ M]KR{iEխa.9 zn;&PHe pa)#? iTgc1]Q$;E!ڴјBHĤYD@Ӕ*脉DY JVI*[/FKc@@PwO4yy>fVX!$@S ! 6MS]btJ Ҧ ,<b'ԧ6C. F ˰{JE@4%֦Fx3u)0T}Je Ca.A? ;ȥ.<^G-ٌ=<ϸ ({3n.0P l+-lHR #+%cbea JuXa. 
Jan 23 06:33:19 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 06:33:19 crc restorecon[4698]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 06:33:19 crc restorecon[4698]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 
23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 
23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 
06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 
crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 
06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc 
restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc 
restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc 
restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc 
restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 
crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc 
restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:19 crc
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:19 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc 
restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 06:33:20 crc restorecon[4698]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 06:33:20 crc restorecon[4698]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 23 06:33:20 crc kubenswrapper[4937]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 06:33:20 crc kubenswrapper[4937]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 23 06:33:20 crc kubenswrapper[4937]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 06:33:20 crc kubenswrapper[4937]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 06:33:20 crc kubenswrapper[4937]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 23 06:33:20 crc kubenswrapper[4937]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.336018 4937 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345110 4937 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345171 4937 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345181 4937 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345195 4937 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345211 4937 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345224 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345234 4937 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345245 4937 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345254 4937 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345264 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345274 4937 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345282 4937 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345292 4937 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345301 4937 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345311 4937 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345320 4937 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345328 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345340 4937 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345351 4937 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345361 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345370 4937 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345379 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345388 4937 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345397 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345405 4937 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345415 4937 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345423 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345432 4937 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345441 4937 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345449 4937 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345458 4937 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345467 4937 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345488 4937 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345496 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345506 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345516 4937 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345525 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345536 4937 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345544 4937 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345553 4937 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345561 4937 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345570 4937 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345578 4937 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345586 4937 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345628 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345636 4937 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345645 4937 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345653 4937 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345662 4937 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345671 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345713 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345722 4937 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345730 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345739 4937 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345747 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345756 4937 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345764 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345772 4937 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345781 4937 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345794 4937 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345804 4937 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345814 4937 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345823 4937 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345833 4937 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345846 4937 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345857 4937 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345866 4937 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345876 4937 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345885 4937 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345897 4937 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.345906 4937 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346103 4937 flags.go:64] FLAG: --address="0.0.0.0"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346127 4937 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346143 4937 flags.go:64] FLAG: --anonymous-auth="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346156 4937 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346173 4937 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346185 4937 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346209 4937 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346222 4937 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346233 4937 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346243 4937 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346255 4937 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346266 4937 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346276 4937 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346287 4937 flags.go:64] FLAG: --cgroup-root=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346297 4937 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346307 4937 flags.go:64] FLAG: --client-ca-file=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346317 4937 flags.go:64] FLAG: --cloud-config=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346327 4937 flags.go:64] FLAG: --cloud-provider=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346337 4937 flags.go:64] FLAG: --cluster-dns="[]"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346351 4937 flags.go:64] FLAG: --cluster-domain=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346360 4937 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346371 4937 flags.go:64] FLAG: --config-dir=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346382 4937 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346394 4937 flags.go:64] FLAG: --container-log-max-files="5"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346408 4937 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346418 4937 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346429 4937 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346440 4937 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346450 4937 flags.go:64] FLAG: --contention-profiling="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346460 4937 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346470 4937 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346482 4937 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346493 4937 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346506 4937 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346516 4937 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346580 4937 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346614 4937 flags.go:64] FLAG: --enable-load-reader="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346626 4937 flags.go:64] FLAG: --enable-server="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346638 4937 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346651 4937 flags.go:64] FLAG: --event-burst="100"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346661 4937 flags.go:64] FLAG: --event-qps="50"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346671 4937 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346681 4937 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346692 4937 flags.go:64] FLAG: --eviction-hard=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346705 4937 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346715 4937 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346725 4937 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346739 4937 flags.go:64] FLAG: --eviction-soft=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346749 4937 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346759 4937 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346769 4937 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346779 4937 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346789 4937 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346799 4937 flags.go:64] FLAG: --fail-swap-on="true"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346809 4937 flags.go:64] FLAG: --feature-gates=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346828 4937 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346839 4937 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346850 4937 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346860 4937 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346871 4937 flags.go:64] FLAG: --healthz-port="10248"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346882 4937 flags.go:64] FLAG: --help="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346892 4937 flags.go:64] FLAG: --hostname-override=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346903 4937 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346913 4937 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346923 4937 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346934 4937 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346943 4937 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346953 4937 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346963 4937 flags.go:64] FLAG: --image-service-endpoint=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346973 4937 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346983
4937 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.346993 4937 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347003 4937 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347014 4937 flags.go:64] FLAG: --kube-reserved="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347024 4937 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347034 4937 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347044 4937 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347054 4937 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347063 4937 flags.go:64] FLAG: --lock-file="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347074 4937 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347084 4937 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347094 4937 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347109 4937 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347121 4937 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347131 4937 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347141 4937 flags.go:64] FLAG: --logging-format="text" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347151 4937 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 06:33:20 crc kubenswrapper[4937]: 
I0123 06:33:20.347164 4937 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347173 4937 flags.go:64] FLAG: --manifest-url="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347183 4937 flags.go:64] FLAG: --manifest-url-header="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347197 4937 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347207 4937 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347220 4937 flags.go:64] FLAG: --max-pods="110" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347230 4937 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347250 4937 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347262 4937 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347272 4937 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347283 4937 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347293 4937 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347304 4937 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347328 4937 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347339 4937 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347349 4937 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347359 4937 
flags.go:64] FLAG: --pod-cidr="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347368 4937 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347383 4937 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347392 4937 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347402 4937 flags.go:64] FLAG: --pods-per-core="0" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347412 4937 flags.go:64] FLAG: --port="10250" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347422 4937 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347433 4937 flags.go:64] FLAG: --provider-id="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347442 4937 flags.go:64] FLAG: --qos-reserved="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347469 4937 flags.go:64] FLAG: --read-only-port="10255" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347480 4937 flags.go:64] FLAG: --register-node="true" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347491 4937 flags.go:64] FLAG: --register-schedulable="true" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347501 4937 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347518 4937 flags.go:64] FLAG: --registry-burst="10" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347527 4937 flags.go:64] FLAG: --registry-qps="5" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347538 4937 flags.go:64] FLAG: --reserved-cpus="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347552 4937 flags.go:64] FLAG: --reserved-memory="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 
06:33:20.347567 4937 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347576 4937 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347586 4937 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347624 4937 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347634 4937 flags.go:64] FLAG: --runonce="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347644 4937 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347654 4937 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347666 4937 flags.go:64] FLAG: --seccomp-default="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347676 4937 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347686 4937 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347696 4937 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347707 4937 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347717 4937 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347727 4937 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347737 4937 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347747 4937 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347757 4937 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" 
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347768 4937 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347778 4937 flags.go:64] FLAG: --system-cgroups="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347787 4937 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347804 4937 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347814 4937 flags.go:64] FLAG: --tls-cert-file="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347824 4937 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347837 4937 flags.go:64] FLAG: --tls-min-version="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347847 4937 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347857 4937 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347867 4937 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347877 4937 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347887 4937 flags.go:64] FLAG: --v="2" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347900 4937 flags.go:64] FLAG: --version="false" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347913 4937 flags.go:64] FLAG: --vmodule="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347926 4937 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.347938 4937 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348199 4937 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 
23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348210 4937 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348221 4937 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348230 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348240 4937 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348248 4937 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348258 4937 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348266 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348275 4937 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348284 4937 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348292 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348304 4937 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348315 4937 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348326 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348338 4937 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348347 4937 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348356 4937 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348365 4937 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348373 4937 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348381 4937 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348391 4937 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348399 4937 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348408 4937 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348416 4937 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348425 4937 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348433 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348442 4937 feature_gate.go:330] unrecognized feature 
gate: InsightsOnDemandDataGather Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348451 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348459 4937 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348467 4937 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348477 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348485 4937 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348493 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348502 4937 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348510 4937 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348519 4937 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348527 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348536 4937 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348546 4937 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348554 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348563 4937 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 
06:33:20.348572 4937 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348580 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348615 4937 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348624 4937 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348633 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348642 4937 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348651 4937 feature_gate.go:330] unrecognized feature gate: Example Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348659 4937 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348671 4937 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348681 4937 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348692 4937 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348703 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348712 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348722 4937 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348732 4937 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348741 4937 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348752 4937 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348764 4937 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348773 4937 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348783 4937 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348793 4937 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348801 4937 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348810 4937 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348818 4937 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348827 4937 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 
06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348835 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348844 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348852 4937 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348861 4937 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.348870 4937 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.348920 4937 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.359291 4937 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.359345 4937 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359479 4937 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359503 4937 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359512 4937 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359523 4937 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359532 4937 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359541 4937 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359548 4937 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359556 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359564 4937 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359575 4937 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359587 4937 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359621 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359632 4937 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359643 4937 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359652 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359660 4937 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359668 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359676 4937 
feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359684 4937 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359692 4937 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359701 4937 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359709 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359717 4937 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359725 4937 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359733 4937 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359740 4937 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359749 4937 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359756 4937 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359764 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359771 4937 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359779 4937 feature_gate.go:330] unrecognized feature gate: Example Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359787 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 
06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359795 4937 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359803 4937 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359813 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359821 4937 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359829 4937 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359839 4937 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359848 4937 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359857 4937 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359867 4937 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359876 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359885 4937 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359893 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359901 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359909 4937 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359917 4937 
feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359925 4937 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359933 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359941 4937 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359949 4937 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359957 4937 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359965 4937 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359973 4937 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359981 4937 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.359989 4937 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360000 4937 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360009 4937 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360018 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360027 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360036 4937 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360044 4937 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360055 4937 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360068 4937 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360083 4937 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360095 4937 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360106 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360116 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360126 4937 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360136 4937 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360151 4937 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.360166 4937 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360410 4937 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360425 4937 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360434 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360443 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360453 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360462 4937 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360470 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360479 4937 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360487 4937 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360495 4937 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360504 4937 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360512 4937 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360520 4937 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360528 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360536 4937 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360544 4937 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360552 4937 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360560 4937 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360568 4937 feature_gate.go:330] unrecognized feature gate: Example
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360575 4937 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360583 4937 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360620 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360628 4937 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360637 4937 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360645 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360654 4937 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360661 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360669 4937 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360677 4937 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360684 4937 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360692 4937 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360700 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360708 4937 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360715 4937 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360724 4937 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360733 4937 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360741 4937 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360749 4937 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360758 4937 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360767 4937 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360776 4937 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360784 4937 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360792 4937 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360800 4937 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360808 4937 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360816 4937 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360824 4937 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360834 4937 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360843 4937 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360856 4937 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360868 4937 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360905 4937 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360919 4937 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360932 4937 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360944 4937 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360958 4937 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360968 4937 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360977 4937 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360987 4937 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.360998 4937 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361008 4937 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361019 4937 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361033 4937 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361046 4937 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361058 4937 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361069 4937 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361079 4937 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361089 4937 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361099 4937 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361109 4937 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.361122 4937 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.361136 4937 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.361574 4937 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.366470 4937 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.366650 4937 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.367634 4937 server.go:997] "Starting client certificate rotation"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.367683 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.367944 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 23:23:16.129452415 +0000 UTC
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.368116 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.375873 4937 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.378557 4937 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.380459 4937 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.391582 4937 log.go:25] "Validated CRI v1 runtime API"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.416933 4937 log.go:25] "Validated CRI v1 image API"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.419409 4937 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.423411 4937 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-06-28-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.423445 4937 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.436440 4937 manager.go:217] Machine: {Timestamp:2026-01-23 06:33:20.435196511 +0000 UTC m=+0.238963184 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c6f09717-13e3-4c26-b541-e217196b2ab6 BootID:a1babb72-84ba-4ca7-966b-7f641e51838d Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e3:36:e9 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e3:36:e9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c1:6c:ca Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8a:63:48 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d4:c2:9e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c0:8d:89 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8e:c0:f9:ef:24:38 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:cb:30:1a:e7:b1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.436706 4937 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.436914 4937 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.437702 4937 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.437882 4937 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.437926 4937 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.438184 4937 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.438199 4937 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.438384 4937 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.438425 4937 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.438662 4937 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.438760 4937 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.439787 4937 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.439814 4937 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.439844 4937 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.439861 4937 kubelet.go:324] "Adding apiserver pod source"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.439876 4937 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.441901 4937 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.442319 4937 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.442549 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.442691 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.442775 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.442849 4937 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.442820 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443407 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443431 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443441 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443450 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443465 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443476 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443486 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443500 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443510 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443518 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443549 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443557 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.443773 4937 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.444192 4937 server.go:1280] "Started kubelet"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.444688 4937 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.444680 4937 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.445564 4937 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 06:33:20 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.447433 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.447512 4937 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.447948 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:44:21.048580514 +0000 UTC
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.448278 4937 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.448470 4937 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.448638 4937 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.448532 4937 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.448686 4937 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.449331 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.449429 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.458797 4937 factory.go:55] Registering systemd factory
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.458848 4937 factory.go:221] Registration of the systemd container factory successfully
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.459014 4937 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.460331 4937 factory.go:153] Registering CRI-O factory
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.460419 4937 factory.go:221] Registration of the crio container factory successfully
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.460562 4937 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.460700 4937 factory.go:103] Registering Raw factory
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.460740 4937 manager.go:1196] Started watching for new ooms in manager
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.460717 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms"
Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.461033 4937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d488d52db46b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:33:20.444163768 +0000 UTC m=+0.247930421,LastTimestamp:2026-01-23 06:33:20.444163768 +0000 UTC m=+0.247930421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.466251 4937 manager.go:319] Starting recovery of all containers
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.468900 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.468965 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.468982 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.468996 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469033 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469085 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469094 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469103 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469116 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469125 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5"
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469135 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469144 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469155 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.469167 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473393 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473409 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473421 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473431 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473442 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473454 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473465 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473476 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473493 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473503 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473512 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473520 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473533 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473543 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473579 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473607 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473620 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473629 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473638 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473651 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473663 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473676 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473689 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473702 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473714 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473726 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473765 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473778 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473791 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473804 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473821 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473836 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473848 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473860 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473873 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473885 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473898 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473910 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" 
seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473931 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473943 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473958 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473969 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473982 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.473995 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474006 
4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474017 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474030 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474048 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474060 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474072 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474083 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474095 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474107 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474119 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474132 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474143 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474157 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474172 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474189 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474203 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474216 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474231 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474247 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" 
seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474262 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474277 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474291 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474305 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474321 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474334 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 
06:33:20.474350 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474362 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474380 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474393 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474414 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474426 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474518 4937 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474534 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474548 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474564 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474581 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474614 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474628 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474641 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474652 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474667 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474679 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474691 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474704 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474717 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474729 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474750 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474766 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474780 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474797 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474816 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474829 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474842 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474853 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474865 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474877 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474888 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474898 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474910 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474920 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474931 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474942 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" 
seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474952 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474962 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474973 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474984 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.474998 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475012 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475023 4937 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475060 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475071 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475081 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475091 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475101 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475112 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475122 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475133 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475142 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475151 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475161 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475209 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475221 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475252 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475261 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475272 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475282 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475313 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475324 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475334 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475344 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475377 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475405 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475417 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475429 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475440 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475468 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475481 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475524 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475539 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475552 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475581 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475607 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475617 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475630 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475640 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" 
seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475669 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475679 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475690 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475700 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475712 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475722 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 
06:33:20.475735 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475746 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475771 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475783 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475794 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475804 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475832 4937 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475845 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475855 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475865 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475893 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475904 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475916 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475928 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475938 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475965 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475975 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.475985 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476010 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476024 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476037 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476060 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476073 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476086 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476096 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476109 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476136 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476147 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476159 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476171 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476183 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.476196 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478508 4937 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478582 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478645 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478672 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478695 4937 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478720 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478746 4937 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478765 4937 reconstruct.go:97] "Volume reconstruction finished" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.478779 4937 reconciler.go:26] "Reconciler: start to sync state" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.510671 4937 manager.go:324] Recovery completed Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.518836 4937 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.524870 4937 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.524991 4937 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.525059 4937 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.525207 4937 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 06:33:20 crc kubenswrapper[4937]: W0123 06:33:20.529774 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.529893 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.531049 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.536627 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.536668 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.536677 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.538108 4937 cpu_manager.go:225] 
"Starting CPU manager" policy="none" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.538129 4937 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.538150 4937 state_mem.go:36] "Initialized new in-memory state store" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.548855 4937 policy_none.go:49] "None policy: Start" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.548989 4937 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.550022 4937 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.550048 4937 state_mem.go:35] "Initializing new in-memory state store" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.626024 4937 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.638837 4937 manager.go:334] "Starting Device Plugin manager" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.638918 4937 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.638935 4937 server.go:79] "Starting device plugin registration server" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.639522 4937 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.639564 4937 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.639738 4937 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.639906 4937 plugin_manager.go:116] "The 
desired_state_of_world populator (plugin watcher) starts" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.639923 4937 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.646631 4937 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.661887 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.740547 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.742099 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.742165 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.742175 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.742211 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.742883 4937 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.826352 4937 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.826576 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.828340 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.828396 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.828417 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.828743 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.828954 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.829035 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.829955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830036 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830270 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830343 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830420 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830468 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830425 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.830547 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831632 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831679 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831786 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831867 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831964 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.832017 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.831975 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.832937 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.832990 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.833009 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.833540 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.833629 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.833650 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.833941 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.834018 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.834051 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835197 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835291 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835711 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835789 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835798 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.835897 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.837550 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.837573 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.837582 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.884770 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.884827 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.884872 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.884905 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.884942 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.884974 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885007 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885036 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885071 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885102 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc 
kubenswrapper[4937]: I0123 06:33:20.885134 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885165 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885195 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885226 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.885256 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: 
I0123 06:33:20.943394 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.945062 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.945094 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.945103 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.945130 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:20 crc kubenswrapper[4937]: E0123 06:33:20.945749 4937 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986073 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986143 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986182 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986219 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986258 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986292 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986329 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986362 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986392 
4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986391 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986444 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986512 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986521 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986552 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986424 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986681 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986745 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986780 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986770 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986786 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986822 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986797 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986851 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.986890 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.987048 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.987126 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.987162 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.987237 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.987161 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:20 crc kubenswrapper[4937]: I0123 06:33:20.987310 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 
23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.064024 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.173142 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.200735 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.216587 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.228652 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ce2b16ccc40af1bac088f7b3b47dde83b64dc23d38402dda1b6abbf50f65d78a WatchSource:0}: Error finding container ce2b16ccc40af1bac088f7b3b47dde83b64dc23d38402dda1b6abbf50f65d78a: Status 404 returned error can't find the container with id ce2b16ccc40af1bac088f7b3b47dde83b64dc23d38402dda1b6abbf50f65d78a Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.243574 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-38e26c06974dc037557ab3f309cc50bd19ef0d5573f8a84c6b220b8f64e4ac74 WatchSource:0}: Error finding container 38e26c06974dc037557ab3f309cc50bd19ef0d5573f8a84c6b220b8f64e4ac74: Status 404 returned error can't find the container with id 38e26c06974dc037557ab3f309cc50bd19ef0d5573f8a84c6b220b8f64e4ac74 Jan 23 06:33:21 crc kubenswrapper[4937]: 
I0123 06:33:21.265898 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.273028 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.286768 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ede5055bcbf76265c33adbc432fb71295e237c595d7c428d3560f15bd155c2e2 WatchSource:0}: Error finding container ede5055bcbf76265c33adbc432fb71295e237c595d7c428d3560f15bd155c2e2: Status 404 returned error can't find the container with id ede5055bcbf76265c33adbc432fb71295e237c595d7c428d3560f15bd155c2e2 Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.288606 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-67aff9ce3887f0e1042fbbed0c15c9ca3ca2981b3e3c69cb78111ae9a177c60e WatchSource:0}: Error finding container 67aff9ce3887f0e1042fbbed0c15c9ca3ca2981b3e3c69cb78111ae9a177c60e: Status 404 returned error can't find the container with id 67aff9ce3887f0e1042fbbed0c15c9ca3ca2981b3e3c69cb78111ae9a177c60e Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.346672 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.349580 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.349693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 
06:33:21.349715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.349762 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.350583 4937 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.405929 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.406041 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.448717 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:26:06.853251354 +0000 UTC Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.460569 4937 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.540692 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"67aff9ce3887f0e1042fbbed0c15c9ca3ca2981b3e3c69cb78111ae9a177c60e"} Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.542071 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ede5055bcbf76265c33adbc432fb71295e237c595d7c428d3560f15bd155c2e2"} Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.543295 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"38e26c06974dc037557ab3f309cc50bd19ef0d5573f8a84c6b220b8f64e4ac74"} Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.544849 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ce2b16ccc40af1bac088f7b3b47dde83b64dc23d38402dda1b6abbf50f65d78a"} Jan 23 06:33:21 crc kubenswrapper[4937]: I0123 06:33:21.546279 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"05f7602c39ffdad93d3927574d48c7daff4b11c9c71991c6251d30d4e9508be6"} Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.633426 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.633904 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.688535 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.688657 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:21 crc kubenswrapper[4937]: W0123 06:33:21.800058 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.800184 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:21 crc kubenswrapper[4937]: E0123 06:33:21.866186 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s" Jan 23 
06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.151589 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.153770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.153836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.153857 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.153896 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:22 crc kubenswrapper[4937]: E0123 06:33:22.154565 4937 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.64:6443: connect: connection refused" node="crc" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.449361 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 01:38:42.320800152 +0000 UTC Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.452521 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 06:33:22 crc kubenswrapper[4937]: E0123 06:33:22.453787 4937 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 
06:33:22.459975 4937 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.550669 4937 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74" exitCode=0 Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.550778 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.550823 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.552368 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.552398 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.552413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.562564 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.563151 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.563191 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.563201 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.563211 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.564178 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.564224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.564239 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.570555 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa" exitCode=0 Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.570629 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.570674 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.571644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.571670 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.571682 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.573061 4937 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b" exitCode=0 Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.573163 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.573244 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.574617 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.574654 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 
crc kubenswrapper[4937]: I0123 06:33:22.574665 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.575444 4937 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459" exitCode=0 Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.575490 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459"} Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.575520 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.578224 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.578467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.578501 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.578520 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.579789 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.579883 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:22 crc kubenswrapper[4937]: I0123 06:33:22.579912 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:22 crc kubenswrapper[4937]: E0123 06:33:22.940950 4937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d488d52db46b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:33:20.444163768 +0000 UTC m=+0.247930421,LastTimestamp:2026-01-23 06:33:20.444163768 +0000 UTC m=+0.247930421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 06:33:23 crc kubenswrapper[4937]: W0123 06:33:23.302133 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:23 crc kubenswrapper[4937]: E0123 06:33:23.302230 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.302642 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.307985 4937 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:23 crc kubenswrapper[4937]: W0123 06:33:23.424139 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:23 crc kubenswrapper[4937]: E0123 06:33:23.424269 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.64:6443: connect: connection refused" logger="UnhandledError" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.449809 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:11:54.060408857 +0000 UTC Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.460561 4937 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.64:6443: connect: connection refused Jan 23 06:33:23 crc kubenswrapper[4937]: E0123 06:33:23.467179 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.585971 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.586040 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.586060 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.586073 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.590446 4937 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec" exitCode=0 Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.590503 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.590668 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.591827 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:23 crc 
kubenswrapper[4937]: I0123 06:33:23.591858 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.591873 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.597065 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.597266 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.598673 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.598713 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.598731 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.623636 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.624130 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.624282 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13"} Jan 23 
06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.624330 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.624346 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25"} Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.630468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.630512 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.630526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.630723 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.630758 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.630772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.754781 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.757092 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.757169 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.757189 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:23 crc kubenswrapper[4937]: I0123 06:33:23.757227 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.450326 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:06:16.061506981 +0000 UTC Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.631929 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620"} Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.632113 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.633574 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.633670 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.633689 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635111 4937 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e" 
exitCode=0 Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635210 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e"} Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635256 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635306 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635422 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635800 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.635843 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636372 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636829 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636891 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636905 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636870 
4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636939 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636911 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.636993 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.637014 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.637292 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.637335 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:24 crc kubenswrapper[4937]: I0123 06:33:24.637357 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.451304 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:38:55.486424557 +0000 UTC Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.647254 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.647322 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.647896 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f"} Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.647945 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c"} Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.647961 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e"} Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.648291 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.648323 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.648335 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:25 crc kubenswrapper[4937]: I0123 06:33:25.896619 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.452339 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:31:08.605087936 +0000 UTC Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.659259 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab"} Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.659312 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.659347 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df"} Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.659393 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.659408 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.662353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.662405 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.662424 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.662442 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.662517 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.662541 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 
06:33:26.663086 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 06:33:26 crc kubenswrapper[4937]: I0123 06:33:26.939382 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.453082 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:55:24.869954612 +0000 UTC Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.662278 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.662370 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.662291 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.663646 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.663679 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.663691 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.663693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.663727 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.663743 4937 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:27 crc kubenswrapper[4937]: I0123 06:33:27.737995 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 06:33:28 crc kubenswrapper[4937]: I0123 06:33:28.454033 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:12:18.463906082 +0000 UTC Jan 23 06:33:28 crc kubenswrapper[4937]: I0123 06:33:28.665277 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:28 crc kubenswrapper[4937]: I0123 06:33:28.666348 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:28 crc kubenswrapper[4937]: I0123 06:33:28.666391 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:28 crc kubenswrapper[4937]: I0123 06:33:28.666402 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:29 crc kubenswrapper[4937]: I0123 06:33:29.089391 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:29 crc kubenswrapper[4937]: I0123 06:33:29.089722 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:29 crc kubenswrapper[4937]: I0123 06:33:29.091124 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:29 crc kubenswrapper[4937]: I0123 06:33:29.091161 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:29 crc kubenswrapper[4937]: I0123 06:33:29.091174 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 06:33:29 crc kubenswrapper[4937]: I0123 06:33:29.454902 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:37:57.938246865 +0000 UTC Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.200539 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.200746 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.200790 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.202029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.202059 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.202074 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.455803 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:16:20.776792552 +0000 UTC Jan 23 06:33:30 crc kubenswrapper[4937]: E0123 06:33:30.647037 4937 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.730497 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.730800 
4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.732754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.732822 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.732848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.951813 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.952055 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.953763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.953841 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:30 crc kubenswrapper[4937]: I0123 06:33:30.953858 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:31 crc kubenswrapper[4937]: I0123 06:33:31.183861 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 06:33:31 crc kubenswrapper[4937]: I0123 06:33:31.184212 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:31 crc kubenswrapper[4937]: I0123 06:33:31.186010 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 23 06:33:31 crc kubenswrapper[4937]: I0123 06:33:31.186281 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:31 crc kubenswrapper[4937]: I0123 06:33:31.186418 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:31 crc kubenswrapper[4937]: I0123 06:33:31.456502 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:52:06.89216443 +0000 UTC Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.401508 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.401746 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.403084 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.403170 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.403192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.408791 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.457016 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:31:12.669993669 +0000 UTC Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.677936 4937 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.679048 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.679128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:32 crc kubenswrapper[4937]: I0123 06:33:32.679148 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:33 crc kubenswrapper[4937]: I0123 06:33:33.200792 4937 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 06:33:33 crc kubenswrapper[4937]: I0123 06:33:33.200929 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:33:33 crc kubenswrapper[4937]: I0123 06:33:33.458173 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:01:20.35665395 +0000 UTC Jan 23 06:33:33 crc kubenswrapper[4937]: E0123 06:33:33.758910 4937 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 23 06:33:33 crc kubenswrapper[4937]: W0123 06:33:33.937256 4937 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:33:33 crc kubenswrapper[4937]: I0123 06:33:33.937428 4937 trace.go:236] Trace[1841335999]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:33:23.934) (total time: 10002ms): Jan 23 06:33:33 crc kubenswrapper[4937]: Trace[1841335999]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (06:33:33.937) Jan 23 06:33:33 crc kubenswrapper[4937]: Trace[1841335999]: [10.002632667s] [10.002632667s] END Jan 23 06:33:33 crc kubenswrapper[4937]: E0123 06:33:33.937481 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 06:33:34 crc kubenswrapper[4937]: W0123 06:33:34.076001 4937 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:33:34 crc kubenswrapper[4937]: I0123 06:33:34.076181 4937 trace.go:236] Trace[1405615133]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:33:24.074) (total time: 10001ms): Jan 23 06:33:34 crc kubenswrapper[4937]: Trace[1405615133]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:33:34.075) Jan 23 06:33:34 crc 
kubenswrapper[4937]: Trace[1405615133]: [10.001433993s] [10.001433993s] END Jan 23 06:33:34 crc kubenswrapper[4937]: E0123 06:33:34.076221 4937 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 06:33:34 crc kubenswrapper[4937]: I0123 06:33:34.458701 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:59:02.22431427 +0000 UTC Jan 23 06:33:34 crc kubenswrapper[4937]: I0123 06:33:34.461194 4937 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 06:33:35 crc kubenswrapper[4937]: I0123 06:33:35.209187 4937 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 06:33:35 crc kubenswrapper[4937]: I0123 06:33:35.209290 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 06:33:35 crc kubenswrapper[4937]: I0123 06:33:35.217947 4937 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe 
failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 06:33:35 crc kubenswrapper[4937]: I0123 06:33:35.218065 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 06:33:35 crc kubenswrapper[4937]: I0123 06:33:35.459228 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 04:44:02.360827378 +0000 UTC Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.460116 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:41:16.334861117 +0000 UTC Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.948123 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.948481 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.950390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.950476 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.950499 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.956887 4937 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.959925 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.961541 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.961628 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.961650 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:36 crc kubenswrapper[4937]: I0123 06:33:36.961691 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:36 crc kubenswrapper[4937]: E0123 06:33:36.967946 4937 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.460818 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:29:14.445110793 +0000 UTC Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.694444 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.695824 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.695877 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.695902 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.770324 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.770686 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.772770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.772854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.772873 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:37 crc kubenswrapper[4937]: I0123 06:33:37.787362 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.131263 4937 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.450077 4937 apiserver.go:52] "Watching apiserver" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.455939 4937 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.456316 4937 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.456978 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.457247 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.457266 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:33:38 crc kubenswrapper[4937]: E0123 06:33:38.457452 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.457775 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.457852 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.457914 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:38 crc kubenswrapper[4937]: E0123 06:33:38.457953 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:38 crc kubenswrapper[4937]: E0123 06:33:38.458020 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.460845 4937 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.460988 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:34:06.51763588 +0000 UTC Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.461771 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.461788 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.462037 4937 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.462206 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.462642 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.462819 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.462929 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.468053 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.468704 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.508555 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.530235 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.570996 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.585424 4937 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.587255 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.597988 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.613487 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.627074 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:38 crc kubenswrapper[4937]: I0123 06:33:38.722514 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 06:33:39 crc kubenswrapper[4937]: I0123 06:33:39.461388 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:46:43.316268162 +0000 UTC Jan 23 06:33:39 crc kubenswrapper[4937]: I0123 06:33:39.526078 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:39 crc kubenswrapper[4937]: E0123 06:33:39.526324 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:39 crc kubenswrapper[4937]: I0123 06:33:39.526472 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:39 crc kubenswrapper[4937]: E0123 06:33:39.526821 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.211126 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.215005 4937 trace.go:236] Trace[964983734]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:33:29.118) (total time: 11096ms): Jan 23 06:33:40 crc kubenswrapper[4937]: Trace[964983734]: ---"Objects listed" error: 11096ms (06:33:40.214) Jan 23 06:33:40 crc kubenswrapper[4937]: Trace[964983734]: [11.096564679s] [11.096564679s] END Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.215035 4937 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.215094 4937 trace.go:236] Trace[715530639]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 06:33:28.306) (total time: 11908ms): Jan 23 06:33:40 crc kubenswrapper[4937]: Trace[715530639]: 
---"Objects listed" error: 11908ms (06:33:40.214) Jan 23 06:33:40 crc kubenswrapper[4937]: Trace[715530639]: [11.908809304s] [11.908809304s] END Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.215151 4937 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.217866 4937 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.222830 4937 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.246966 4937 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60582->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.247020 4937 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60578->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.247169 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60578->192.168.126.11:17697: read: connection reset by peer" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.247076 4937 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60582->192.168.126.11:17697: read: connection reset by peer" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.247796 4937 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.247836 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.248160 4937 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.248215 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.258554 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.266770 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.274825 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.275232 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.286841 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.298494 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.315018 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-2
3T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318325 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318388 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318418 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318442 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318468 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318496 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318520 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318546 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318646 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 
06:33:40.318679 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318714 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318741 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318766 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318792 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318821 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318846 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318879 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318884 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318937 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.318916 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319005 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319048 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319110 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319164 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319209 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319248 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319265 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319283 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319330 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319352 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319372 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319412 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319449 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319486 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319498 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319525 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319564 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319631 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319665 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319722 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319779 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319820 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319857 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319902 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319938 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320051 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320098 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320137 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320212 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320250 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320316 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320546 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" 
(UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320644 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320710 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320758 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320816 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320862 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 
06:33:40.320907 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320971 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321034 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321085 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321131 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321179 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321229 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321290 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321345 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321402 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321450 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321552 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321645 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321705 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321755 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321816 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321867 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321940 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322009 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322062 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322124 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322175 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322232 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322403 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322482 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322538 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322638 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322714 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322792 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322833 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322869 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322905 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322943 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.322978 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323045 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323112 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323169 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323222 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323281 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323331 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323390 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323436 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323720 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323819 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323877 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323936 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323989 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324037 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324092 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324127 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324161 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324196 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324238 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324303 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324347 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324421 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324466 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324502 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324539 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324578 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324661 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324716 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324762 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324797 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324834 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324871 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324906 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324943 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324984 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325024 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325062 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325099 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325139 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325179 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325214 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325253 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325291 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325328 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325362 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325398 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325433 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325467 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325502 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325536 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325574 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325651 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325689 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325725 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325760 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325795 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325830 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325865 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325899 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325942 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325980 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326016 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326050 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326086 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326123 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326165 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326217 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326276 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326333 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326388 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326456 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326509 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326568 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326658 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326719 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326784 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326837 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326891 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326948 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.326988 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327118 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327160 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327201 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327239 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327282 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327322 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327361 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327397 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327436 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327474 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327523 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327575 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327680 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327743 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327784 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327819 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327857 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327911 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327973 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328042 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328101 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328159 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328221 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328318 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328381 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328436 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328480 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328522 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328560 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328652 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328695 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329034 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329198 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329260 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329312 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328729 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329362 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329423 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329478 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329526 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329570 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329676 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329720 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329760 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329801 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329840 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.330007 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.330045 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319721 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.319801 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320012 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320040 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320095 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320126 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320145 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331467 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331507 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331530 4937 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331553 4937 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331574 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320172 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320268 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320315 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320347 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320380 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331690 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320534 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320535 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320557 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320641 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331862 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331981 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320894 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320926 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320974 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321098 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321148 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321173 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.321196 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323312 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323344 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323856 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.323993 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324298 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324539 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324610 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.332394 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324778 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325038 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.324951 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325410 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325424 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.325816 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327336 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.327567 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.332654 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.328852 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329217 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.329333 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.330902 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.330964 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.330775 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331001 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331256 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331277 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.331662 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.331793 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.320963 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.332041 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.333495 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.333531 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.333535 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:40.833457877 +0000 UTC m=+20.637224690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.332792 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.333315 4937 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.333853 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.333950 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:40.833721056 +0000 UTC m=+20.637487899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.333295 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.334761 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.335457 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.335764 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.336026 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.333734 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.337905 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.337899 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.343992 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.345029 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.345227 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.345470 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.345829 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.346163 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.346826 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.347313 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.347843 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.348304 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.348657 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.349225 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.349743 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.350429 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.350490 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.350925 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.351013 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.351051 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.351218 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.351387 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.352193 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.352288 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.352401 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.352431 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.352454 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.352543 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:40.852514409 +0000 UTC m=+20.656281092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.352897 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.353169 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.353471 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.353779 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.354333 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.354423 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.354859 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.355186 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.357026 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.357429 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.357704 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.357964 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.358090 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.358477 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.358536 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.359088 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.359840 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.360271 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.362494 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.367116 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.367735 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.367782 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.367931 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.368123 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.369081 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.369117 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.369138 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.369219 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:40.869186922 +0000 UTC m=+20.672953835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.369303 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.369304 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.369650 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.371250 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:33:40.871076626 +0000 UTC m=+20.674843289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.371297 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.372531 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.372788 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.373161 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.373225 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.373333 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.373460 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.373575 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.373743 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.378852 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.379168 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.379827 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.379829 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.380262 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.380269 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.380753 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.381004 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.381030 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.381086 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.381216 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.381419 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.381729 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.382052 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.382118 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.382543 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.382747 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.383233 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.383823 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.384002 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.384644 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.384680 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.384813 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385211 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385282 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385390 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385541 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385546 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385927 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385954 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.385998 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.386086 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.386296 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.386297 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.386843 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.387207 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.387389 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.387433 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.387481 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.387827 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.388010 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.388081 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.388293 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.388385 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.388525 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.388645 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.389099 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.389369 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.389793 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.389889 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.389971 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.390326 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391031 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391321 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391378 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391501 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391539 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391769 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.391860 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.392030 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.392310 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.392450 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.392500 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.392750 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.393523 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.393245 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.394176 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.394703 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.395122 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.395239 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.403361 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.405567 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.408466 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.421299 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432408 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432489 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432560 4937 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432578 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432611 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432628 4937 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc 
kubenswrapper[4937]: I0123 06:33:40.432640 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432652 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432707 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432722 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432733 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433069 4937 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433097 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433111 4937 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433125 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433139 4937 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433151 4937 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433164 4937 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433178 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433191 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433203 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433215 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433227 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433240 4937 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433253 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433265 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433279 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433298 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433314 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433328 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433366 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433380 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433393 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433406 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433419 4937 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433436 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433448 4937 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433462 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432921 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433524 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433610 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433632 4937 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434071 4937 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434096 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434113 4937 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434127 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.433328 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434140 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434240 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434255 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434269 4937 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.432971 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:33:40 crc 
kubenswrapper[4937]: I0123 06:33:40.434282 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434374 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434391 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434403 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434414 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434451 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434463 4937 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434476 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434487 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434498 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434533 4937 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434546 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434562 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434579 4937 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434649 4937 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc 
kubenswrapper[4937]: I0123 06:33:40.434663 4937 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434675 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434712 4937 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434726 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434738 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434750 4937 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434815 4937 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434831 4937 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434844 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434889 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434903 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434915 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434928 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434967 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434980 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.434992 4937 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435005 4937 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435031 4937 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435044 4937 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435057 4937 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435069 4937 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435080 4937 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") 
on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435092 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435109 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435122 4937 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435134 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435146 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435157 4937 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435168 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435179 4937 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435191 4937 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435201 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435211 4937 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435223 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435236 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435248 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435259 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435269 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435281 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435294 4937 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435308 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435319 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435336 4937 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435348 4937 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435360 4937 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435373 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435387 4937 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435398 4937 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435410 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435421 4937 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435432 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435446 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435458 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435470 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435484 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435542 4937 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435557 4937 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435569 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435583 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435617 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435629 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435641 4937 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435653 4937 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435684 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435695 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc 
kubenswrapper[4937]: I0123 06:33:40.435707 4937 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435910 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435923 4937 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435934 4937 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435946 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435958 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435969 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435981 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.435992 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436004 4937 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436015 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436027 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436038 4937 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436049 4937 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436060 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath 
\"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436073 4937 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436084 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436095 4937 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436108 4937 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436121 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436134 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436147 4937 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436159 4937 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436171 4937 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436183 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436196 4937 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436207 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436220 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436230 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436238 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436248 4937 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436257 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436267 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436275 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436284 4937 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436293 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.436302 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437657 4937 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437701 4937 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437715 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437730 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437744 4937 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437756 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437770 4937 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: 
I0123 06:33:40.437783 4937 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437796 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437808 4937 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437822 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437834 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437847 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437859 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437875 4937 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437888 4937 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437902 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437915 4937 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437928 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437941 4937 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437954 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.437966 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" 
(UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.442677 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.454412 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.462500 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:39:41.290009195 +0000 UTC Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.466121 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.489285 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.500084 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.508180 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.525414 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.525542 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.531081 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.531914 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.533886 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.534827 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.536304 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.537193 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.538116 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.538898 4937 reconciler_common.go:293] "Volume detached for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.539396 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.539410 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.540475 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.541765 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.542410 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.543991 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.544865 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.545778 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.546961 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.547669 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.548984 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.549865 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.550927 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.552576 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.553199 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.554506 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.555115 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.556431 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.556977 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.557916 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.559343 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.559770 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.560131 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.561408 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.562051 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.563237 4937 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.563378 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.565788 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.567281 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.567877 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.569930 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.571066 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.571640 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.572267 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 
06:33:40.573108 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.574498 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.575202 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.576425 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.577072 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.578050 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.578502 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.579375 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 
06:33:40.580029 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.581098 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.581267 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.581405 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.581627 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.583738 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.584383 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.585132 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.585771 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.586309 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.591774 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 06:33:40 crc kubenswrapper[4937]: W0123 06:33:40.598488 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-cdbff145dde25871847c8b08694f89f8ea6b762657117b7c640b77af63f6d2ec WatchSource:0}: Error finding container cdbff145dde25871847c8b08694f89f8ea6b762657117b7c640b77af63f6d2ec: Status 404 returned error can't find the container with id cdbff145dde25871847c8b08694f89f8ea6b762657117b7c640b77af63f6d2ec Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.598524 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.598898 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: W0123 06:33:40.608063 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0aa625b79022281330551d8c8bcd7f375342afaa8513e30bb10bc1abdef77fdc WatchSource:0}: Error finding container 0aa625b79022281330551d8c8bcd7f375342afaa8513e30bb10bc1abdef77fdc: Status 404 returned error can't find the container with id 0aa625b79022281330551d8c8bcd7f375342afaa8513e30bb10bc1abdef77fdc Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.610670 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.620837 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.633037 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.722016 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ff74739f051fd266da32e81adbf9a9fefe6fefd8a4005996510a33d54beffdb4"} Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.730146 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0aa625b79022281330551d8c8bcd7f375342afaa8513e30bb10bc1abdef77fdc"} Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.742965 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cdbff145dde25871847c8b08694f89f8ea6b762657117b7c640b77af63f6d2ec"} Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.749378 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.753289 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620" exitCode=255 Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.753363 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620"} Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.765947 4937 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.766364 4937 scope.go:117] "RemoveContainer" containerID="8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.774849 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.795773 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.814167 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.825368 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.841817 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.843772 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.843807 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.843904 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 
06:33:40.843963 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:41.84394985 +0000 UTC m=+21.647716503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.843967 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.844061 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:41.844039822 +0000 UTC m=+21.647806475 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.854158 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.864675 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.878217 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.947099 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.947212 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.947256 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947390 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947407 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947418 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947465 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:41.947450365 +0000 UTC m=+21.751217018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947864 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:33:41.947855606 +0000 UTC m=+21.751622259 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947922 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947934 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947942 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:40 crc kubenswrapper[4937]: E0123 06:33:40.947961 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:41.947955599 +0000 UTC m=+21.751722252 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:40 crc kubenswrapper[4937]: I0123 06:33:40.980729 4937 csr.go:261] certificate signing request csr-w9928 is approved, waiting to be issued Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.010414 4937 csr.go:257] certificate signing request csr-w9928 is issued Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.462737 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:24:56.234056818 +0000 UTC Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.526126 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.526326 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.526709 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.526804 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.653630 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bhj54"] Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.654011 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.656115 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.656343 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.656826 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.656910 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.657487 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.658666 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bglvs"] Jan 23 06:33:41 crc 
kubenswrapper[4937]: I0123 06:33:41.659478 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-js46n"] Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.659835 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.661056 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dbxzj"] Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.663566 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.663577 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.669829 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.669838 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.669900 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.670123 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.670305 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.675319 4937 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.675360 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.675443 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.675452 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.675698 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.677825 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.700256 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.715142 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.738069 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754178 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-system-cni-dir\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754221 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-cnibin\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754242 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-hostroot\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754262 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dkphv\" (UniqueName: \"kubernetes.io/projected/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-kube-api-access-dkphv\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754280 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-cni-binary-copy\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754377 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-netns\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754439 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-daemon-config\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754530 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-rootfs\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.754642 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-k8s-cni-cncf-io\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755552 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-kubelet\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755697 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-multus-certs\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755729 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-os-release\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755750 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d929cad-0d4c-472d-94dd-cba5d415d0d3-hosts-file\") pod \"node-resolver-js46n\" (UID: \"2d929cad-0d4c-472d-94dd-cba5d415d0d3\") " pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755873 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-cni-bin\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755910 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0df70988-ba4d-42b9-bd64-415fa126969d-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755931 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-cni-multus\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755950 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-conf-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755973 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-proxy-tls\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.755999 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pdqq2\" (UniqueName: \"kubernetes.io/projected/0df70988-ba4d-42b9-bd64-415fa126969d-kube-api-access-pdqq2\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756018 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-system-cni-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756098 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-cni-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756142 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-mcd-auth-proxy-config\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756162 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ssf\" (UniqueName: \"kubernetes.io/projected/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-kube-api-access-l6ssf\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756186 
4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-socket-dir-parent\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756238 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-os-release\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756261 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756279 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf8q8\" (UniqueName: \"kubernetes.io/projected/2d929cad-0d4c-472d-94dd-cba5d415d0d3-kube-api-access-gf8q8\") pod \"node-resolver-js46n\" (UID: \"2d929cad-0d4c-472d-94dd-cba5d415d0d3\") " pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756300 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-etc-kubernetes\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 
crc kubenswrapper[4937]: I0123 06:33:41.756328 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0df70988-ba4d-42b9-bd64-415fa126969d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.756351 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-cnibin\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.758313 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.760030 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765"} Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.761695 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5"} Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.761728 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4"} Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.762741 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137"} Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.768886 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.802336 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.827628 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.844560 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.857827 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-socket-dir-parent\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " 
pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.857994 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858017 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf8q8\" (UniqueName: \"kubernetes.io/projected/2d929cad-0d4c-472d-94dd-cba5d415d0d3-kube-api-access-gf8q8\") pod \"node-resolver-js46n\" (UID: \"2d929cad-0d4c-472d-94dd-cba5d415d0d3\") " pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858150 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858179 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-os-release\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858215 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-etc-kubernetes\") pod \"multus-bhj54\" (UID: 
\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.857941 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-socket-dir-parent\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858275 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0df70988-ba4d-42b9-bd64-415fa126969d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858402 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-etc-kubernetes\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858420 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-cnibin\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858496 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-system-cni-dir\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 
06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858519 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-cnibin\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858533 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-cnibin\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858540 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-hostroot\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858563 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkphv\" (UniqueName: \"kubernetes.io/projected/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-kube-api-access-dkphv\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858573 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-system-cni-dir\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858584 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-cni-binary-copy\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858623 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-netns\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858644 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-daemon-config\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858647 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-cnibin\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.858404 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858686 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-rootfs\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 
06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858680 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-os-release\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.858739 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:43.858710984 +0000 UTC m=+23.662477637 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-hostroot\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858663 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-rootfs\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858718 4937 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-netns\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858924 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-k8s-cni-cncf-io\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858935 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0df70988-ba4d-42b9-bd64-415fa126969d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858952 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-kubelet\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.858974 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-multus-certs\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859071 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/0df70988-ba4d-42b9-bd64-415fa126969d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859089 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-k8s-cni-cncf-io\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859093 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-run-multus-certs\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859110 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-kubelet\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859138 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859310 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-os-release\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.859371 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.859425 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:43.859412124 +0000 UTC m=+23.663178957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859618 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-os-release\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859647 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d929cad-0d4c-472d-94dd-cba5d415d0d3-hosts-file\") pod \"node-resolver-js46n\" (UID: \"2d929cad-0d4c-472d-94dd-cba5d415d0d3\") " pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859678 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-cni-bin\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859696 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0df70988-ba4d-42b9-bd64-415fa126969d-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859716 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-cni-multus\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859732 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-conf-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859765 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdqq2\" (UniqueName: \"kubernetes.io/projected/0df70988-ba4d-42b9-bd64-415fa126969d-kube-api-access-pdqq2\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859779 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-system-cni-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859799 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-proxy-tls\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859819 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-mcd-auth-proxy-config\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859827 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-cni-binary-copy\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859834 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6ssf\" (UniqueName: \"kubernetes.io/projected/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-kube-api-access-l6ssf\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859865 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-cni-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859879 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-conf-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859927 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2d929cad-0d4c-472d-94dd-cba5d415d0d3-hosts-file\") pod \"node-resolver-js46n\" (UID: \"2d929cad-0d4c-472d-94dd-cba5d415d0d3\") " pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.859943 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-daemon-config\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.860017 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-multus-cni-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.860181 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-cni-multus\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " 
pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.860212 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-host-var-lib-cni-bin\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.860449 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0df70988-ba4d-42b9-bd64-415fa126969d-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.860537 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-system-cni-dir\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.860838 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-mcd-auth-proxy-config\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.869892 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.870475 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-proxy-tls\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.882062 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6ssf\" (UniqueName: \"kubernetes.io/projected/2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9-kube-api-access-l6ssf\") pod \"machine-config-daemon-bglvs\" (UID: \"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\") " pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.890301 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkphv\" (UniqueName: 
\"kubernetes.io/projected/ddcbbc37-6ac2-41e5-a7ea-04de9284c50a-kube-api-access-dkphv\") pod \"multus-bhj54\" (UID: \"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\") " pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.894361 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdqq2\" (UniqueName: \"kubernetes.io/projected/0df70988-ba4d-42b9-bd64-415fa126969d-kube-api-access-pdqq2\") pod \"multus-additional-cni-plugins-dbxzj\" (UID: \"0df70988-ba4d-42b9-bd64-415fa126969d\") " pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.895977 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf8q8\" (UniqueName: \"kubernetes.io/projected/2d929cad-0d4c-472d-94dd-cba5d415d0d3-kube-api-access-gf8q8\") pod \"node-resolver-js46n\" (UID: \"2d929cad-0d4c-472d-94dd-cba5d415d0d3\") " pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.905931 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.939389 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.960357 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.960463 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.960489 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.960645 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.960663 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.960677 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.960722 4937 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:43.960709246 +0000 UTC m=+23.764475899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.961047 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:33:43.961024495 +0000 UTC m=+23.764791138 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.961103 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.961114 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.961122 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:41 crc kubenswrapper[4937]: E0123 06:33:41.961147 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:43.961139498 +0000 UTC m=+23.764906151 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.971418 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bhj54" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.984796 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:33:41 crc kubenswrapper[4937]: I0123 06:33:41.993318 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-js46n" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.003496 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.011528 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 06:28:41 +0000 UTC, rotation deadline is 2026-10-25 06:07:47.912492822 +0000 UTC Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.011901 4937 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6599h34m5.900595686s for next certificate rotation Jan 23 06:33:42 crc kubenswrapper[4937]: W0123 06:33:42.024183 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d929cad_0d4c_472d_94dd_cba5d415d0d3.slice/crio-fac936c8b48692603ab01cc8b149e2c236ca0aaf4f02db85d2588afcd5db9ecc WatchSource:0}: Error finding container fac936c8b48692603ab01cc8b149e2c236ca0aaf4f02db85d2588afcd5db9ecc: Status 404 returned error can't find the container with id fac936c8b48692603ab01cc8b149e2c236ca0aaf4f02db85d2588afcd5db9ecc Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.028317 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: W0123 06:33:42.036740 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df70988_ba4d_42b9_bd64_415fa126969d.slice/crio-d13964dabcb23b8150386ce28ac9da7ec8f7ccdaee6a7544a0de22ea04a44b97 WatchSource:0}: Error finding container d13964dabcb23b8150386ce28ac9da7ec8f7ccdaee6a7544a0de22ea04a44b97: Status 404 returned error can't find the container with id d13964dabcb23b8150386ce28ac9da7ec8f7ccdaee6a7544a0de22ea04a44b97 Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.113118 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.166196 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hqgs9"]
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.167110 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.182766 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.182982 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.183002 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.183096 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.183281 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.183324 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.214020 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.214524 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266253 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-ovn-kubernetes\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266517 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-config\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266611 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266684 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovn-node-metrics-cert\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266775 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77hj\" (UniqueName: \"kubernetes.io/projected/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-kube-api-access-v77hj\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266857 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-systemd-units\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.266931 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267025 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-bin\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267094 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-env-overrides\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267168 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-kubelet\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267242 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-netns\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267306 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-etc-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267379 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-ovn\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267447 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-var-lib-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267516 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-log-socket\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267599 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-systemd\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267668 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-node-log\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267748 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-slash\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267811 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-netd\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.267899 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-script-lib\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.270557 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.316158 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.336817 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369163 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369219 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovn-node-metrics-cert\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369246 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77hj\" (UniqueName: \"kubernetes.io/projected/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-kube-api-access-v77hj\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369284 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-systemd-units\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369310 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369332 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-bin\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369330 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369357 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-env-overrides\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369383 4937 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-netns\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369426 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-etc-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369429 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369449 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-ovn\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369475 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-kubelet\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369504 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-var-lib-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369526 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-log-socket\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369547 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-systemd\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369568 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-node-log\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369660 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-slash\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369685 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-netd\") pod \"ovnkube-node-hqgs9\" 
(UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369722 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-script-lib\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369753 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-ovn-kubernetes\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369780 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-config\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369843 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-systemd-units\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369884 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-var-lib-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.369910 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-bin\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370209 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-netns\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370250 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-etc-openvswitch\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370277 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-ovn\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370303 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-kubelet\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370330 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-slash\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370357 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-log-socket\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370382 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-systemd\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370406 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-node-log\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370522 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-netd\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370601 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-ovn-kubernetes\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.370979 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-config\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.371278 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-script-lib\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.371709 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-env-overrides\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.378256 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovn-node-metrics-cert\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.381673 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.395444 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77hj\" (UniqueName: \"kubernetes.io/projected/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-kube-api-access-v77hj\") pod \"ovnkube-node-hqgs9\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.399731 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.416950 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.432366 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.474462 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 09:21:06.293409296 +0000 UTC Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.477717 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.488136 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.508572 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.525746 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:42 crc kubenswrapper[4937]: E0123 06:33:42.525910 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.537265 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.553151 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.575657 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.597460 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.615349 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.628837 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.651689 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.680374 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.697883 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.714050 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.727203 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.743111 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.758272 4937 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.767830 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-js46n" event={"ID":"2d929cad-0d4c-472d-94dd-cba5d415d0d3","Type":"ContainerStarted","Data":"07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.767891 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-js46n" event={"ID":"2d929cad-0d4c-472d-94dd-cba5d415d0d3","Type":"ContainerStarted","Data":"fac936c8b48692603ab01cc8b149e2c236ca0aaf4f02db85d2588afcd5db9ecc"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.771619 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerStarted","Data":"755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.771702 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerStarted","Data":"62bfefd560e451cf30ecf7dfee9485bed78ec0babd6880135b67117c5e2bbcb5"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.774582 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.774677 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.774689 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"6635c8da065cb1d670b08c037d3800168071eb4d89013dbcc8f0383acd893898"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.775939 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a" exitCode=0 Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.775995 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.776017 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"d5ee99735aa0583e200520ef0ecacb41803a0693581ea665038547c23f94aafc"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.778521 4937 generic.go:334] "Generic (PLEG): container finished" podID="0df70988-ba4d-42b9-bd64-415fa126969d" containerID="0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f" exitCode=0 Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.778641 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerDied","Data":"0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f"} Jan 23 06:33:42 crc 
kubenswrapper[4937]: I0123 06:33:42.778684 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerStarted","Data":"d13964dabcb23b8150386ce28ac9da7ec8f7ccdaee6a7544a0de22ea04a44b97"} Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.779404 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.787008 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.803322 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.829890 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.845437 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.867256 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.897434 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.916390 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.954669 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:42 crc kubenswrapper[4937]: I0123 06:33:42.992799 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:42Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.055386 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.088455 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.108115 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.131859 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.154389 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.179454 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.369068 4937 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.371090 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.371121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.371131 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.371234 4937 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.383665 4937 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.383948 4937 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.385191 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc 
kubenswrapper[4937]: I0123 06:33:43.385266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.385280 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.385320 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.385336 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.411396 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.433065 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.433120 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.433133 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.433156 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.433171 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.459242 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.475771 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:03:12.911400964 +0000 UTC Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.475917 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.475965 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.475974 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.475995 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.476009 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.489575 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.494071 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.494102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.494111 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.494128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.494167 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.507211 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.511087 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.511117 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.511126 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.511142 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.511153 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.526279 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.526423 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.526959 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.527045 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.527096 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.527239 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.528988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.529011 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.529021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.529035 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.529047 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.632099 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.632438 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.632448 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.632468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.632479 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.735178 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.735251 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.735275 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.735308 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.735331 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.754262 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wqqs8"] Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.754815 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.758674 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.758674 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.759420 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.761637 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.779915 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.792134 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.795203 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" 
event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerStarted","Data":"73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.797379 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.801844 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.801896 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.815779 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.838266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc 
kubenswrapper[4937]: I0123 06:33:43.838316 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.838329 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.838362 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.838376 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.840959 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.877250 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.893023 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.893083 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7387919d-1f76-4e34-9994-194a2a3c5dbb-serviceca\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.893116 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7387919d-1f76-4e34-9994-194a2a3c5dbb-host\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.893133 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rnv6\" (UniqueName: \"kubernetes.io/projected/7387919d-1f76-4e34-9994-194a2a3c5dbb-kube-api-access-9rnv6\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.893155 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.893225 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.893275 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:47.893260288 +0000 UTC m=+27.697026941 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.895230 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.895264 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:47.895256265 +0000 UTC m=+27.699022908 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.895572 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"nam
e\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.925178 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.941577 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.941621 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.941631 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 
06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.941647 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.941657 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:43Z","lastTransitionTime":"2026-01-23T06:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.947756 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.963393 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.993757 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.993867 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rnv6\" (UniqueName: \"kubernetes.io/projected/7387919d-1f76-4e34-9994-194a2a3c5dbb-kube-api-access-9rnv6\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.993925 4937 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:33:47.99389542 +0000 UTC m=+27.797662113 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.993979 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.994025 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.994082 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7387919d-1f76-4e34-9994-194a2a3c5dbb-serviceca\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 
06:33:43.994122 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7387919d-1f76-4e34-9994-194a2a3c5dbb-host\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.994197 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7387919d-1f76-4e34-9994-194a2a3c5dbb-host\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994269 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994314 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994342 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994371 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994419 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 
06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994424 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:47.994400734 +0000 UTC m=+27.798167387 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994443 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:43 crc kubenswrapper[4937]: E0123 06:33:43.994535 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:47.994509267 +0000 UTC m=+27.798275920 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.995141 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7387919d-1f76-4e34-9994-194a2a3c5dbb-serviceca\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:43 crc kubenswrapper[4937]: I0123 06:33:43.996829 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:43Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.013081 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rnv6\" (UniqueName: \"kubernetes.io/projected/7387919d-1f76-4e34-9994-194a2a3c5dbb-kube-api-access-9rnv6\") pod \"node-ca-wqqs8\" (UID: \"7387919d-1f76-4e34-9994-194a2a3c5dbb\") " pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.026794 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.044341 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.044374 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.044383 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.044400 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.044411 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.048063 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.061043 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.076535 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.111543 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.125049 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.142678 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.148207 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.148245 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.148254 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.148273 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 
06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.148283 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.164486 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.174556 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wqqs8" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.193479 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: W0123 06:33:44.202972 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7387919d_1f76_4e34_9994_194a2a3c5dbb.slice/crio-321127f027e19c6f7fe9f461f0b7e5aab4b710d91ffad3de9fc9b0a5b3ef7a76 WatchSource:0}: Error finding container 321127f027e19c6f7fe9f461f0b7e5aab4b710d91ffad3de9fc9b0a5b3ef7a76: Status 404 returned error can't find the container with id 321127f027e19c6f7fe9f461f0b7e5aab4b710d91ffad3de9fc9b0a5b3ef7a76 Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.209179 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.223229 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.237525 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.252957 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.252997 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.253006 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc 
kubenswrapper[4937]: I0123 06:33:44.253026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.253038 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.259144 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.273385 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.288461 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.305228 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.323827 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.342221 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.359408 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.360901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.360955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.360971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.360997 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.361014 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.376029 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.463525 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.464038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.464051 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 
06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.464073 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.464087 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.476582 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:15:03.887194755 +0000 UTC Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.526480 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:44 crc kubenswrapper[4937]: E0123 06:33:44.526681 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.566801 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.566846 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.566860 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.566879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.566891 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.669862 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.670210 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.670224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.670244 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.670254 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.773056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.773109 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.773120 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.773139 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.773152 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.807578 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wqqs8" event={"ID":"7387919d-1f76-4e34-9994-194a2a3c5dbb","Type":"ContainerStarted","Data":"83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.807657 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wqqs8" event={"ID":"7387919d-1f76-4e34-9994-194a2a3c5dbb","Type":"ContainerStarted","Data":"321127f027e19c6f7fe9f461f0b7e5aab4b710d91ffad3de9fc9b0a5b3ef7a76"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.812102 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.812148 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.812161 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.812213 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 
06:33:44.814294 4937 generic.go:334] "Generic (PLEG): container finished" podID="0df70988-ba4d-42b9-bd64-415fa126969d" containerID="73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1" exitCode=0 Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.814584 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerDied","Data":"73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.837902 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.861075 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.878676 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.878716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.878727 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.878749 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 
06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.878767 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.884057 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.907661 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.922728 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.937971 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.952150 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.970323 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.981756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.981798 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.981818 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.981838 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.981852 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:44Z","lastTransitionTime":"2026-01-23T06:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.983548 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:44 crc kubenswrapper[4937]: I0123 06:33:44.995935 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.009966 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.023101 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.040353 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.055659 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.067692 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.078976 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.083998 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.084044 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.084058 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.084078 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 
06:33:45.084092 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.091689 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-poli
cy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.114456 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.129795 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.143412 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.156477 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.168983 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.178888 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.190120 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.190182 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.190198 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.190218 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.190233 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.200060 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.222485 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.234483 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.250677 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.261933 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.273866 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.286077 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.292924 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc 
kubenswrapper[4937]: I0123 06:33:45.292951 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.292962 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.292978 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.292991 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.395483 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.395537 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.395547 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.395567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.395580 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.477430 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:28:19.904794756 +0000 UTC Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.498424 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.498496 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.498511 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.498537 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.498553 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.526264 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.526332 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:45 crc kubenswrapper[4937]: E0123 06:33:45.526673 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:45 crc kubenswrapper[4937]: E0123 06:33:45.526893 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.601255 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.601304 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.601319 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.601341 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.601356 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.704089 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.704136 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.704149 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.704170 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.704222 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.807045 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.807121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.807142 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.807178 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.807204 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.818837 4937 generic.go:334] "Generic (PLEG): container finished" podID="0df70988-ba4d-42b9-bd64-415fa126969d" containerID="fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371" exitCode=0 Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.818904 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerDied","Data":"fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.845768 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.863874 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.882841 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.904963 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.909611 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.909651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.909660 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:45 crc 
kubenswrapper[4937]: I0123 06:33:45.909677 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.909692 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:45Z","lastTransitionTime":"2026-01-23T06:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.918864 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 06:33:45.942428 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:45 crc kubenswrapper[4937]: I0123 
06:33:45.977914 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.001756 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.012225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.012283 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.012295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.012313 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.012327 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.015454 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.037468 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.067911 4937 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.100618 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.118114 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.118181 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.118196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.118224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.118240 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.132341 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.154453 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.178648 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.220435 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.220471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.220479 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.220497 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.220509 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.322606 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.322645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.322654 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.322672 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.322682 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.425148 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.425183 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.425192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.425207 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.425219 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.478443 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:33:49.150747397 +0000 UTC Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.525529 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:46 crc kubenswrapper[4937]: E0123 06:33:46.525707 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.527790 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.527833 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.527848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.527868 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.527884 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.631568 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.632045 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.632058 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.632076 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.632089 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.734171 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.734228 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.734240 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.734260 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.734276 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.825860 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.828178 4937 generic.go:334] "Generic (PLEG): container finished" podID="0df70988-ba4d-42b9-bd64-415fa126969d" containerID="52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8" exitCode=0 Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.828221 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerDied","Data":"52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.837141 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.837191 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.837204 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.837226 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.837241 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.848068 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.867105 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.881493 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.894664 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.910879 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.923997 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.939366 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.939418 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.939430 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.939449 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.939464 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:46Z","lastTransitionTime":"2026-01-23T06:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.941012 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.954056 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.970408 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.981310 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:46 crc kubenswrapper[4937]: I0123 06:33:46.999742 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:46Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.021250 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.036696 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.041423 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.041455 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.041463 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.041480 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.041490 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.057628 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.071223 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.143526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.143815 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.143960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc 
kubenswrapper[4937]: I0123 06:33:47.144085 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.144168 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.247318 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.247841 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.248041 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.248225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.248566 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.351271 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.351561 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.351665 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.351776 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.351864 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.454606 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.454862 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.454923 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.455021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.455082 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.479178 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:12:10.411448077 +0000 UTC Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.525704 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:47 crc kubenswrapper[4937]: E0123 06:33:47.526179 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.526379 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:47 crc kubenswrapper[4937]: E0123 06:33:47.526537 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.559895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.560169 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.560287 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.560534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.560727 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.663672 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.663955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.664038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.664118 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.664192 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.766950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.767007 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.767020 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.767044 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.767059 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.835168 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerStarted","Data":"078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.856780 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.868047 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.870881 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.870910 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.870920 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc 
kubenswrapper[4937]: I0123 06:33:47.870941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.870955 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.878909 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.893239 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdq
q2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.912578 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.928054 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.940698 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.941428 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.941515 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:47 crc kubenswrapper[4937]: E0123 06:33:47.941647 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:47 crc kubenswrapper[4937]: E0123 06:33:47.941726 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:55.941706858 +0000 UTC m=+35.745473501 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:47 crc kubenswrapper[4937]: E0123 06:33:47.941647 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:47 crc kubenswrapper[4937]: E0123 06:33:47.941818 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:55.94178991 +0000 UTC m=+35.745556653 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.958155 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.970491 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.973153 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.973194 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.973210 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:47 crc 
kubenswrapper[4937]: I0123 06:33:47.973245 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.973259 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:47Z","lastTransitionTime":"2026-01-23T06:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.982967 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:47 crc kubenswrapper[4937]: I0123 06:33:47.997806 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:47Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.009805 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.021194 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.032335 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.041479 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.042716 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.042875 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.042916 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:33:56.042878246 +0000 UTC m=+35.846644899 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.042983 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043023 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043061 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043078 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043086 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:48 crc kubenswrapper[4937]: 
E0123 06:33:48.043100 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043111 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043140 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:56.043123183 +0000 UTC m=+35.846889836 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.043156 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:56.043150844 +0000 UTC m=+35.846917497 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.075840 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.075898 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.075915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.075935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.075951 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.180142 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.180213 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.180229 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.180254 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.180270 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.282934 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.283008 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.283020 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.283036 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.283046 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.386266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.386301 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.386311 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.386326 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.386341 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.480215 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:25:47.76278262 +0000 UTC Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.489038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.489084 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.489097 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.489118 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.489133 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.525716 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:48 crc kubenswrapper[4937]: E0123 06:33:48.525903 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.591853 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.591935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.591951 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.591972 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.591985 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.694848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.694896 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.694915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.694938 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.694954 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.797347 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.797414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.797441 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.797471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.797493 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.841793 4937 generic.go:334] "Generic (PLEG): container finished" podID="0df70988-ba4d-42b9-bd64-415fa126969d" containerID="078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca" exitCode=0 Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.841863 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerDied","Data":"078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.856928 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.868903 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.883038 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.895489 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.900633 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.900684 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.900695 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.900716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.900731 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:48Z","lastTransitionTime":"2026-01-23T06:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.915831 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.942867 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.960132 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:48 crc kubenswrapper[4937]: I0123 06:33:48.984575 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:48Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.003823 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.003871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.003883 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.003900 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.003911 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.004347 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.023281 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.035866 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.047259 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.060130 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.073048 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.086211 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.106079 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.106118 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.106204 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.106221 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.106231 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.208850 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.208884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.208893 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.208912 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.208922 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.311907 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.311940 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.311950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.311967 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.311978 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.414895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.414933 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.414945 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.414964 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.414979 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.480574 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 15:05:13.27522651 +0000 UTC Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.518007 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.518062 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.518109 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.518131 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.518143 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.526303 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:49 crc kubenswrapper[4937]: E0123 06:33:49.526467 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.526581 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:49 crc kubenswrapper[4937]: E0123 06:33:49.526708 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.621360 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.621416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.621433 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.621458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.621473 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.723963 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.724027 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.724049 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.724075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.724096 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.826638 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.826693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.826709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.826733 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.826752 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.851565 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.856858 4937 generic.go:334] "Generic (PLEG): container finished" podID="0df70988-ba4d-42b9-bd64-415fa126969d" containerID="1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a" exitCode=0 Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.856901 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerDied","Data":"1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.870281 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.891041 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.908911 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.929644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.929684 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.929695 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.929710 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.929720 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:49Z","lastTransitionTime":"2026-01-23T06:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.930947 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.947124 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.963553 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.978565 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:49 crc kubenswrapper[4937]: I0123 06:33:49.992627 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.004024 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.020364 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.033966 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.034046 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.034081 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.034167 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.034186 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.043646 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.066227 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badf
ca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.083368 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.097522 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.111643 4937 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.125113 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.138957 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.139028 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.139079 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.139095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.139127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.139147 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.152417 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.166675 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.181870 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.199438 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.218400 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.237952 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.241874 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.241908 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.241919 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.241935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.241947 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.254330 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.270746 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.286605 4937 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.301769 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.313132 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.324796 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.338083 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.346506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.346538 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.346548 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.346566 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.346580 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.370697 4937 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.448838 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.448893 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.448905 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.448927 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.448943 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.481674 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:07:11.268232882 +0000 UTC Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.525760 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:50 crc kubenswrapper[4937]: E0123 06:33:50.525941 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.547222 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6
c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.552832 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.552865 4937 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.552877 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.552898 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.552912 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.561438 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.577563 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.588186 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.604092 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.624389 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.639438 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.654027 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.666048 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.666098 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.666107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.666127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.666140 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.669008 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.680991 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.693745 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.717927 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.735539 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.753707 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.768844 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.768918 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.768939 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.768969 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.768993 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.775144 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3
fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.864543 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.866927 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" event={"ID":"0df70988-ba4d-42b9-bd64-415fa126969d","Type":"ContainerStarted","Data":"2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.866979 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.867047 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.871542 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.871577 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 
06:33:50.871607 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.871624 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.871639 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.898963 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0e
c79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.926053 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115
d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.943625 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.959070 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.964486 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.964975 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.984144 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.985325 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.985371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.985392 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.985416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:50 crc kubenswrapper[4937]: I0123 06:33:50.985436 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:50Z","lastTransitionTime":"2026-01-23T06:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.001060 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.023153 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.058862 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.076397 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.087526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.087566 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.087575 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.087614 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.087626 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.092361 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.106477 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.120146 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.133493 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.148736 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.159581 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.169339 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.180976 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.190052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.190095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.190104 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.190120 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.190131 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.194307 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.208185 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.219332 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.231265 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.240344 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.249383 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.263197 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.289150 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.292926 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.292960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.292970 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.292985 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.292996 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.306134 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.332070 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.346690 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.362722 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.385791 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.396095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:51 crc 
kubenswrapper[4937]: I0123 06:33:51.396161 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.396173 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.396196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.396209 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.482635 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:22:57.203415706 +0000 UTC Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.498910 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.498968 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.498978 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.498997 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.499010 4937 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.526085 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.526106 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:51 crc kubenswrapper[4937]: E0123 06:33:51.526261 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:51 crc kubenswrapper[4937]: E0123 06:33:51.526396 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.601860 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.601953 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.601965 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.601995 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.602008 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.705037 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.705091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.705106 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.705127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.705140 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.807794 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.807848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.807861 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.807881 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.807897 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.867833 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.910654 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.910707 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.910718 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.910740 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:51 crc kubenswrapper[4937]: I0123 06:33:51.910755 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:51Z","lastTransitionTime":"2026-01-23T06:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.013357 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.013406 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.013416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.013437 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.013449 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.116220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.116277 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.116289 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.116309 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.116321 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.219555 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.219687 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.219706 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.219737 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.219764 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.323407 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.323500 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.323523 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.323567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.323587 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.426690 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.426760 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.426779 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.426804 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.426823 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.483568 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 15:58:37.564062975 +0000 UTC
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.525665 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:33:52 crc kubenswrapper[4937]: E0123 06:33:52.525857 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.535146 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.535197 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.535211 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.535231 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.535246 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.638405 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.638465 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.638480 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.638505 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.638522 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.741581 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.741639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.741654 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.741673 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.741685 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.845181 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.845243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.845257 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.845280 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.845311 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.875659 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/0.log"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.882730 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691" exitCode=1
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.882866 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.884133 4937 scope.go:117] "RemoveContainer" containerID="604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.905654 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:52Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.944053 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:52Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.949035 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:52 crc 
kubenswrapper[4937]: I0123 06:33:52.949081 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.949095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.949121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.949134 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:52Z","lastTransitionTime":"2026-01-23T06:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:52 crc kubenswrapper[4937]: I0123 06:33:52.980138 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:52Z is after 2025-08-24T17:21:41Z"
Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.012650 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.041205 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.051875 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.051913 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.051925 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.051942 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.051952 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.052901 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.063151 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.079018 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.099721 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.121394 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.138319 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.154714 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.154763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.154773 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.154790 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.154803 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.156184 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.175129 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.199449 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:51Z\\\",\\\"message\\\":\\\"] Sending *v1.Node event handler 2 for removal\\\\nI0123 06:33:51.843767 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 06:33:51.842259 6207 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:33:51.843828 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 06:33:51.843887 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:33:51.843899 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:33:51.843928 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 06:33:51.843940 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:33:51.843956 6207 factory.go:656] Stopping watch factory\\\\nI0123 06:33:51.843974 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:33:51.843996 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:33:51.844010 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:33:51.844022 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:33:51.843918 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:33:51.844048 6207 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.216987 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.258443 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.258511 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.258563 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc 
kubenswrapper[4937]: I0123 06:33:53.258615 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.258631 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.361362 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.361420 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.361434 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.361454 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.361470 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.465008 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.465080 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.465102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.465133 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.465158 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.484434 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:02:36.44192019 +0000 UTC Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.526269 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.526467 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.527160 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.527484 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.570033 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.570072 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.570082 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.570111 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.570124 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.672998 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.673066 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.673081 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.673104 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.673119 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.697581 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.697668 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.697690 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.697715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.697731 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.714709 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.720973 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.721010 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.721021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.721039 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.721053 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.739295 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.744467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.744691 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.744845 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.744931 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.744993 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.758174 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.763387 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.763498 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.763513 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.763535 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.763548 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.781989 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.786949 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.787009 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.787023 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.787044 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.787060 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.805519 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: E0123 06:33:53.805934 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.808137 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.808200 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.808213 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.808235 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.808252 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.895841 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/0.log" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.900902 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.901021 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.910223 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.910279 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.910294 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.910320 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.910333 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:53Z","lastTransitionTime":"2026-01-23T06:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.924003 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.951480 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.969240 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.983367 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:53 crc kubenswrapper[4937]: I0123 06:33:53.996921 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:53Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.010493 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.012447 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.012469 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.012480 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:54 crc 
kubenswrapper[4937]: I0123 06:33:54.012498 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.012510 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.024356 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.051042 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:51Z\\\",\\\"message\\\":\\\"] Sending *v1.Node event handler 2 for removal\\\\nI0123 06:33:51.843767 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 06:33:51.842259 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 
06:33:51.843828 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 06:33:51.843887 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:33:51.843899 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:33:51.843928 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 06:33:51.843940 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:33:51.843956 6207 factory.go:656] Stopping watch factory\\\\nI0123 06:33:51.843974 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:33:51.843996 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:33:51.844010 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:33:51.844022 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:33:51.843918 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:33:51.844048 6207 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.064829 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.077541 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.095681 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.109527 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.117674 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.117722 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.117744 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.117767 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.117782 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.124537 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.142115 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.154383 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.220456 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.220515 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.220529 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.220549 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.220566 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.323529 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.323614 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.323624 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.323644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.323655 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.426935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.427011 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.427029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.427058 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.427079 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.485437 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:50:08.301520297 +0000 UTC
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.525909 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:33:54 crc kubenswrapper[4937]: E0123 06:33:54.526169 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.529609 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.529651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.529663 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.529681 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.529695 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.631996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.632038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.632051 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.632069 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.632082 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.735469 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.735527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.735538 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.735560 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.735574 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.838495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.838550 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.838562 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.838584 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.838625 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.905796 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.910994 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.940980 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.941235 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.941328 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.941450 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:54 crc kubenswrapper[4937]: I0123 06:33:54.941557 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:54Z","lastTransitionTime":"2026-01-23T06:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.044532 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.044882 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.044967 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.045130 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.045246 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.148791 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.148879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.148902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.148935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.148955 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.189341 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc"]
Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.191196 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.195123 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.195644 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.215439 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f
1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.223471 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61037bae-85c4-470e-896a-24431192c708-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.223766 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbff\" (UniqueName: \"kubernetes.io/projected/61037bae-85c4-470e-896a-24431192c708-kube-api-access-qrbff\") pod 
\"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.223935 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61037bae-85c4-470e-896a-24431192c708-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.224108 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61037bae-85c4-470e-896a-24431192c708-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.232577 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.252133 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.252356 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.252425 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.252690 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.252858 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.252972 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.271722 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.291949 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.305454 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.322674 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.325040 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61037bae-85c4-470e-896a-24431192c708-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.325098 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61037bae-85c4-470e-896a-24431192c708-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.325147 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbff\" (UniqueName: 
\"kubernetes.io/projected/61037bae-85c4-470e-896a-24431192c708-kube-api-access-qrbff\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.325180 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61037bae-85c4-470e-896a-24431192c708-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.325949 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/61037bae-85c4-470e-896a-24431192c708-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.326104 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/61037bae-85c4-470e-896a-24431192c708-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.335625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/61037bae-85c4-470e-896a-24431192c708-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: 
I0123 06:33:55.340424 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.346415 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrbff\" (UniqueName: \"kubernetes.io/projected/61037bae-85c4-470e-896a-24431192c708-kube-api-access-qrbff\") pod \"ovnkube-control-plane-749d76644c-q5fcc\" (UID: \"61037bae-85c4-470e-896a-24431192c708\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.354091 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.356194 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.356241 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.356256 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.356280 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.356293 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.372821 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.412352 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.432921 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.455125 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:51Z\\\",\\\"message\\\":\\\"] Sending *v1.Node event handler 2 for removal\\\\nI0123 06:33:51.843767 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 06:33:51.842259 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 
06:33:51.843828 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 06:33:51.843887 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:33:51.843899 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:33:51.843928 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 06:33:51.843940 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:33:51.843956 6207 factory.go:656] Stopping watch factory\\\\nI0123 06:33:51.843974 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:33:51.843996 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:33:51.844010 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:33:51.844022 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:33:51.843918 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:33:51.844048 6207 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.459584 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.459666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.459686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.459711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.459755 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.469654 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.484960 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.486018 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:33:02.298086123 +0000 UTC Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.500751 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.515147 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.525702 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.525820 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:55 crc kubenswrapper[4937]: E0123 06:33:55.525936 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:55 crc kubenswrapper[4937]: E0123 06:33:55.526101 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.566627 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.566667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.566681 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.566701 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.566714 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.669403 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.669445 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.669459 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.669483 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.669501 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.771884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.771942 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.771960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.771988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.772005 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.875077 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.875136 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.875152 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.875176 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.875192 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.911511 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" event={"ID":"61037bae-85c4-470e-896a-24431192c708","Type":"ContainerStarted","Data":"30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.911585 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" event={"ID":"61037bae-85c4-470e-896a-24431192c708","Type":"ContainerStarted","Data":"80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.911617 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" event={"ID":"61037bae-85c4-470e-896a-24431192c708","Type":"ContainerStarted","Data":"ffe27311ceee81cd589137c4754244ffccfa71207b2edae0638fa429804bcd09"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.913527 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/1.log" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.914132 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/0.log" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.918215 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20" exitCode=1 Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.918258 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" 
event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.918324 4937 scope.go:117] "RemoveContainer" containerID="604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.919107 4937 scope.go:117] "RemoveContainer" containerID="fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20" Jan 23 06:33:55 crc kubenswrapper[4937]: E0123 06:33:55.919305 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.931619 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.951416 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.967162 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.977896 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:55 crc 
kubenswrapper[4937]: I0123 06:33:55.977934 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.977943 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.977961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.977974 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:55Z","lastTransitionTime":"2026-01-23T06:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:55 crc kubenswrapper[4937]: I0123 06:33:55.988654 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.014075 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.031897 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.032028 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.032070 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.032367 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.032635 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:12.032156041 +0000 UTC m=+51.835922694 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.032678 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:12.032664896 +0000 UTC m=+51.836431549 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.035969 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.048540 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.061473 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.073643 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.080549 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.080582 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.080605 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.080621 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.080632 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.089945 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4
a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.111361 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/
etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.128500 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.132643 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.132837 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:56 crc 
kubenswrapper[4937]: E0123 06:33:56.132909 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:34:12.132865356 +0000 UTC m=+51.936631999 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.132986 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133032 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133056 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133073 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133155 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:12.133136853 +0000 UTC m=+51.936903506 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133240 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133286 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133312 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.133402 4937 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:12.13337204 +0000 UTC m=+51.937138723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.144520 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-al
erter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.162256 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.174402 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.183752 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.183802 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.183817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc 
kubenswrapper[4937]: I0123 06:33:56.183838 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.183851 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.193518 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:51Z\\\",\\\"message\\\":\\\"] Sending *v1.Node event handler 2 for removal\\\\nI0123 06:33:51.843767 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 06:33:51.842259 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 
06:33:51.843828 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 06:33:51.843887 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:33:51.843899 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:33:51.843928 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 06:33:51.843940 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:33:51.843956 6207 factory.go:656] Stopping watch factory\\\\nI0123 06:33:51.843974 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:33:51.843996 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:33:51.844010 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:33:51.844022 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:33:51.843918 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:33:51.844048 6207 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.208536 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.221435 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.233638 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.246801 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.260331 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.272457 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.285796 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.285837 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.285849 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.285868 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.285880 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.291486 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4
a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.312269 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/
etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.327481 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.345264 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.360491 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.371434 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.389089 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.389175 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.389188 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc 
kubenswrapper[4937]: I0123 06:33:56.389209 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.389224 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.392628 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:51Z\\\",\\\"message\\\":\\\"] Sending *v1.Node event handler 2 for removal\\\\nI0123 06:33:51.843767 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 06:33:51.842259 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 
06:33:51.843828 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 06:33:51.843887 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:33:51.843899 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:33:51.843928 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 06:33:51.843940 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:33:51.843956 6207 factory.go:656] Stopping watch factory\\\\nI0123 06:33:51.843974 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:33:51.843996 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:33:51.844010 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:33:51.844022 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:33:51.843918 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:33:51.844048 6207 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607a
b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.405447 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.417656 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.434013 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.486496 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-17 22:21:27.556241612 +0000 UTC Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.492161 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.492225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.492243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.492266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.492283 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.525989 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.526242 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.595495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.595580 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.595614 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.595633 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.595648 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.683379 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7ksbw"] Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.684034 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.684148 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.699619 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.699689 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.699708 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.699736 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.699754 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.701902 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.720638 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9
ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.736340 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc 
kubenswrapper[4937]: I0123 06:33:56.739301 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k294p\" (UniqueName: \"kubernetes.io/projected/5f394d17-1f72-43ba-8d51-b76e56dd6849-kube-api-access-k294p\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.739474 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.752952 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.771863 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.789139 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.803506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.803558 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.803568 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.803602 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.803619 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.809327 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.824059 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.837814 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.840133 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k294p\" (UniqueName: \"kubernetes.io/projected/5f394d17-1f72-43ba-8d51-b76e56dd6849-kube-api-access-k294p\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.840189 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.840318 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.840383 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:57.340360868 +0000 UTC m=+37.144127521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.850895 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.860148 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k294p\" (UniqueName: \"kubernetes.io/projected/5f394d17-1f72-43ba-8d51-b76e56dd6849-kube-api-access-k294p\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.867627 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0e
c79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.892568 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115
d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.906434 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.906538 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.906560 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.906750 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.906774 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:56Z","lastTransitionTime":"2026-01-23T06:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.912404 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.924005 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/1.log" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.928275 4937 scope.go:117] "RemoveContainer" containerID="fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20" Jan 23 06:33:56 crc kubenswrapper[4937]: E0123 06:33:56.928484 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.938801 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://604463b61fd01c33dfffa6476b97e92e9dcd1bcc8c295b6b0a66bd19136f0691\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:51Z\\\",\\\"message\\\":\\\"] Sending *v1.Node event handler 2 for removal\\\\nI0123 06:33:51.843767 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 06:33:51.842259 6207 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 
06:33:51.843828 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 06:33:51.843887 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 06:33:51.843899 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 06:33:51.843928 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 06:33:51.843940 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 06:33:51.843956 6207 factory.go:656] Stopping watch factory\\\\nI0123 06:33:51.843974 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 06:33:51.843996 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 06:33:51.844010 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 06:33:51.844022 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 06:33:51.843918 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 06:33:51.844048 6207 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607a
b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.957383 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.974262 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:56 crc kubenswrapper[4937]: I0123 06:33:56.988871 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.004132 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.009303 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc 
kubenswrapper[4937]: I0123 06:33:57.009345 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.009360 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.009382 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.009397 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.018633 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.034644 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.051643 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.067962 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.084313 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc 
kubenswrapper[4937]: I0123 06:33:57.099840 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.112709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.112762 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.112773 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.112791 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.112803 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.120170 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.143555 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.162463 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.176537 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.189264 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.201338 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.215711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.215784 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.215802 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.215836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.215857 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.218432 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.239284 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.268473 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.286951 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.320007 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.320054 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.320067 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc 
kubenswrapper[4937]: I0123 06:33:57.320087 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.320101 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.346108 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:57 crc kubenswrapper[4937]: E0123 06:33:57.346452 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:33:57 crc kubenswrapper[4937]: E0123 06:33:57.346656 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:33:58.34657167 +0000 UTC m=+38.150338363 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.423902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.423975 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.423995 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.424023 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.424042 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.487520 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:43:27.076915772 +0000 UTC Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.525570 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:57 crc kubenswrapper[4937]: E0123 06:33:57.525821 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.526618 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.527239 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: E0123 06:33:57.527486 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.527629 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.527873 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.528002 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.528142 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.632321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.632750 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.632949 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.633095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.633389 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.737313 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.737385 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.737403 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.737431 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.737456 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.841273 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.841351 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.841374 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.841402 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.841419 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.945075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.945488 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.945730 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.945911 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:57 crc kubenswrapper[4937]: I0123 06:33:57.946120 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:57Z","lastTransitionTime":"2026-01-23T06:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.049723 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.049812 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.049837 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.049869 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.049894 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.153254 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.153288 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.153298 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.153313 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.153323 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.257632 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.257698 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.257711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.257734 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.257754 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.357123 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:58 crc kubenswrapper[4937]: E0123 06:33:58.357367 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:33:58 crc kubenswrapper[4937]: E0123 06:33:58.357471 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:00.357438934 +0000 UTC m=+40.161205617 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.362413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.362479 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.362498 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.362529 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.362556 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.466299 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.466411 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.466437 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.466468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.466490 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.488383 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:26:19.924469474 +0000 UTC Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.525747 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.525747 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:33:58 crc kubenswrapper[4937]: E0123 06:33:58.525941 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:33:58 crc kubenswrapper[4937]: E0123 06:33:58.526200 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.570644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.570709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.570726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.570753 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.570805 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.675332 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.675407 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.675429 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.675463 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.675485 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.779220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.779549 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.779641 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.779716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.779738 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.884222 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.884314 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.884335 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.884365 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.884385 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.987635 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.987716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.987736 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.987766 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:58 crc kubenswrapper[4937]: I0123 06:33:58.987787 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:58Z","lastTransitionTime":"2026-01-23T06:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.090577 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.090695 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.090714 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.090769 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.090791 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.096844 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.142827 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.169714 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.194104 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc 
kubenswrapper[4937]: I0123 06:33:59.194171 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.194187 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.194206 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.194216 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.194740 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.210870 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.230372 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.243451 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.257527 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc 
kubenswrapper[4937]: I0123 06:33:59.276460 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.297766 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.297824 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.297842 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.297873 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.297894 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.317142 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.337231 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd1
2a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.351930 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.366372 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.384493 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.398355 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.401513 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.401571 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.401586 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.401628 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.401644 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.416231 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.440065 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.452515 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.489372 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:48:42.224461042 +0000 UTC Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.505185 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.505232 4937 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.505244 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.505264 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.505278 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.525310 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.525362 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:33:59 crc kubenswrapper[4937]: E0123 06:33:59.525448 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:33:59 crc kubenswrapper[4937]: E0123 06:33:59.525508 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.608371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.608422 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.608435 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.608454 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.608467 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.712217 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.712317 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.712344 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.712386 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.712415 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.814863 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.814909 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.814921 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.814939 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.814949 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.918016 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.918057 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.918070 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.918088 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:33:59 crc kubenswrapper[4937]: I0123 06:33:59.918098 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:33:59Z","lastTransitionTime":"2026-01-23T06:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.020902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.020939 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.020950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.020966 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.020975 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.123434 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.123471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.123481 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.123498 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.123509 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.226382 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.226770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.226871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.226989 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.227085 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.330808 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.330877 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.330901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.330931 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.330947 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.380674 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:00 crc kubenswrapper[4937]: E0123 06:34:00.380993 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:00 crc kubenswrapper[4937]: E0123 06:34:00.381146 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:04.38111402 +0000 UTC m=+44.184880683 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.434196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.434666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.434867 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.435012 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.436704 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.490092 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:03:17.771037598 +0000 UTC Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.525821 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:00 crc kubenswrapper[4937]: E0123 06:34:00.526049 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.526195 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:00 crc kubenswrapper[4937]: E0123 06:34:00.526351 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.539275 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.539311 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.539325 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.539346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.539360 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.545329 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.564108 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9
ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.585296 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc 
kubenswrapper[4937]: I0123 06:34:00.607110 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.629796 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.646046 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.646139 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.646164 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.646199 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.646225 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.656106 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.681759 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.700901 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.715010 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.726767 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.742883 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.749955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.750248 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.750504 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.750844 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.751069 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.769532 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.798343 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd1
2a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.832576 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.852474 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.854018 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.854052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.854067 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc 
kubenswrapper[4937]: I0123 06:34:00.854089 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.854106 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.874001 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.891085 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.956413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:00 crc 
kubenswrapper[4937]: I0123 06:34:00.956460 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.956478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.956500 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:00 crc kubenswrapper[4937]: I0123 06:34:00.956518 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:00Z","lastTransitionTime":"2026-01-23T06:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.059563 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.059652 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.059666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.059686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.059703 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.163319 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.163372 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.163392 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.163417 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.163434 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.267748 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.267805 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.267817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.267836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.267851 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.371718 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.371780 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.371801 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.371826 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.371844 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.475348 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.475438 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.475462 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.475492 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.475511 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.490925 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:33:08.56121403 +0000 UTC Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.525792 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:01 crc kubenswrapper[4937]: E0123 06:34:01.526003 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.526184 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:01 crc kubenswrapper[4937]: E0123 06:34:01.526544 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.578967 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.579026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.579036 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.579055 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.579067 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.682963 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.683410 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.683677 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.683892 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.684270 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.787865 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.787911 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.787922 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.787941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.787953 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.890362 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.890406 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.890415 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.890433 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.890446 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.997667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.997720 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.997740 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.997768 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:01 crc kubenswrapper[4937]: I0123 06:34:01.997787 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:01Z","lastTransitionTime":"2026-01-23T06:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.101229 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.101286 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.101297 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.101314 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.101326 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.204481 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.204771 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.204972 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.205126 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.205280 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.308264 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.308331 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.308346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.308373 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.308388 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.412020 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.412105 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.412125 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.412155 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.412300 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.492329 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:27:44.887166642 +0000 UTC Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.516428 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.516495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.516506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.516528 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.516540 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.525352 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.525379 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:02 crc kubenswrapper[4937]: E0123 06:34:02.525630 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:02 crc kubenswrapper[4937]: E0123 06:34:02.525708 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.619360 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.619423 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.619441 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.619465 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.619483 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.722501 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.722926 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.723128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.723344 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.723526 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.827333 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.827398 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.827410 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.827451 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.827467 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.930764 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.930825 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.930834 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.930856 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:02 crc kubenswrapper[4937]: I0123 06:34:02.930872 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:02Z","lastTransitionTime":"2026-01-23T06:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.033617 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.033670 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.033710 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.033726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.033738 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.136573 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.136637 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.136655 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.136677 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.136689 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.240546 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.240622 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.240639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.240662 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.240678 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.344114 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.344436 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.344587 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.344737 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.345075 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.448157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.448657 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.448786 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.448884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.448972 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.492567 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:26:48.272204345 +0000 UTC Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.526321 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.526487 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.526755 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.526995 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.551729 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.551770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.551780 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.551795 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.551806 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.654658 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.654709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.654720 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.654739 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.654753 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.757920 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.757977 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.757989 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.758009 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.758020 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.842130 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.842179 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.842188 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.842205 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.842218 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.860556 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:03Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.865667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.865833 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.865896 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.865970 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.866055 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.883866 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:03Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.890980 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.891056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.891073 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.891103 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.891124 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.907764 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:03Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.912553 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.912662 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.912683 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.912715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.912741 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.931126 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:03Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.942733 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.942773 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.942783 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.942800 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.942812 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.957375 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:03Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:03 crc kubenswrapper[4937]: E0123 06:34:03.957546 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.959546 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.959616 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.959629 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.959652 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:03 crc kubenswrapper[4937]: I0123 06:34:03.959666 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:03Z","lastTransitionTime":"2026-01-23T06:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.062652 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.062700 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.062711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.062731 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.062743 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.166430 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.166482 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.166495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.166514 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.166532 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.269490 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.269541 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.269556 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.269575 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.269617 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.371973 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.372026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.372042 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.372064 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.372080 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.432144 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:04 crc kubenswrapper[4937]: E0123 06:34:04.432394 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:04 crc kubenswrapper[4937]: E0123 06:34:04.432469 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:12.432448234 +0000 UTC m=+52.236214897 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.475634 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.475682 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.475698 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.475724 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.475741 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.493186 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:29:39.884713714 +0000 UTC Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.525805 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.525828 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:04 crc kubenswrapper[4937]: E0123 06:34:04.525939 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:04 crc kubenswrapper[4937]: E0123 06:34:04.526029 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.579185 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.579542 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.579765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.579976 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.580134 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.683149 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.683203 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.683221 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.683248 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.683267 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.786990 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.787068 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.787088 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.787114 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.787133 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.890304 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.890384 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.890426 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.890484 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.890509 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.993871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.993939 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.993956 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.993990 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:04 crc kubenswrapper[4937]: I0123 06:34:04.994009 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:04Z","lastTransitionTime":"2026-01-23T06:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.097687 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.098064 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.098249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.098393 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.098522 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.202126 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.202187 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.202200 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.202225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.202239 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.305467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.305907 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.306078 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.306283 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.306430 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.409542 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.409786 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.409893 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.409988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.410081 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.493508 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:38:01.87404214 +0000 UTC Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.512760 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.512875 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.512902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.512935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.512961 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.526501 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.526529 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:05 crc kubenswrapper[4937]: E0123 06:34:05.526822 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:05 crc kubenswrapper[4937]: E0123 06:34:05.527039 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.616871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.616954 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.616974 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.617003 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.617025 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.720645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.720725 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.720744 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.720774 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.720793 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.823831 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.823884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.823898 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.823920 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.823938 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.927271 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.927353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.927371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.927401 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:05 crc kubenswrapper[4937]: I0123 06:34:05.927422 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:05Z","lastTransitionTime":"2026-01-23T06:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.032408 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.032472 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.032492 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.032526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.032549 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.136835 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.136901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.136918 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.136947 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.136966 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.240550 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.240674 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.240735 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.240776 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.240804 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.344146 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.344220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.344238 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.344268 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.344288 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.446880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.446932 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.446944 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.446961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.446984 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.495065 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:04:53.492098361 +0000 UTC Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.525534 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:06 crc kubenswrapper[4937]: E0123 06:34:06.525783 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.525943 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:06 crc kubenswrapper[4937]: E0123 06:34:06.526181 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.550015 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.550075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.550091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.550114 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.550131 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.653006 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.653061 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.653071 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.653091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.653103 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.755755 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.756061 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.756232 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.756386 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.756508 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.859237 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.859302 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.859314 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.859336 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.859351 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.962275 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.962325 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.962335 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.962354 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:06 crc kubenswrapper[4937]: I0123 06:34:06.962431 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:06Z","lastTransitionTime":"2026-01-23T06:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.065094 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.065685 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.065863 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.066012 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.066146 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.169642 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.169715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.169734 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.169765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.169786 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.272895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.272957 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.272971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.273000 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.273017 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.376882 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.377351 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.377545 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.377814 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.377964 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.480878 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.480937 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.480952 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.480975 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.480991 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.496143 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:24:13.944272898 +0000 UTC Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.525975 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.526017 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:07 crc kubenswrapper[4937]: E0123 06:34:07.526193 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:07 crc kubenswrapper[4937]: E0123 06:34:07.526478 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.584270 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.584727 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.584904 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.585098 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.585220 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.688996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.689055 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.689066 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.689086 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.689099 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.792640 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.792707 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.792726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.792755 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.792776 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.896346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.896811 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.897001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.897190 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:07 crc kubenswrapper[4937]: I0123 06:34:07.897361 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:07Z","lastTransitionTime":"2026-01-23T06:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.001461 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.001503 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.001512 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.001529 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.001545 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.104698 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.104983 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.105046 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.105159 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.105230 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.209434 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.209765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.209872 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.209971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.210094 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.313457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.313542 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.313567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.313639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.313669 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.417726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.418056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.418199 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.418370 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.418708 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.497186 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:40:00.44319531 +0000 UTC Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.522220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.522276 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.522292 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.522322 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.522342 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.526057 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:08 crc kubenswrapper[4937]: E0123 06:34:08.526267 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.526442 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:08 crc kubenswrapper[4937]: E0123 06:34:08.526816 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.625570 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.625645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.625666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.625693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.625727 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.729013 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.729073 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.729091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.729119 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.729135 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.832072 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.832124 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.832138 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.832159 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.832175 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.937815 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.937878 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.937895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.937917 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:08 crc kubenswrapper[4937]: I0123 06:34:08.937941 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:08Z","lastTransitionTime":"2026-01-23T06:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.041151 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.041698 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.042000 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.042214 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.042431 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.145811 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.146308 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.146452 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.146638 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.146781 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.250433 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.250902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.251003 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.251123 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.251221 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.354698 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.355283 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.355493 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.355853 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.356004 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.459386 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.459828 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.459945 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.460048 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.460143 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.499169 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:46:26.19969649 +0000 UTC Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.525709 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.526236 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:09 crc kubenswrapper[4937]: E0123 06:34:09.526423 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:09 crc kubenswrapper[4937]: E0123 06:34:09.526528 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.526811 4937 scope.go:117] "RemoveContainer" containerID="fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.563059 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.563097 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.563110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.563128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.563141 4937 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.667333 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.667405 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.667424 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.667453 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.667474 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.770163 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.770212 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.770228 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.770248 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.770263 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.873726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.873777 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.873787 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.873804 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.873814 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.977278 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.977324 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.977336 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.977356 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.977369 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:09Z","lastTransitionTime":"2026-01-23T06:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.987003 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/1.log" Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.992476 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14"} Jan 23 06:34:09 crc kubenswrapper[4937]: I0123 06:34:09.993092 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.020020 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40
ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.041890 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd1
2a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.055815 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.070446 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.080094 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.080144 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.080157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.080181 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.080195 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.082924 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.092811 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.109864 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.133838 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.147947 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.165190 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.180395 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.182414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc 
kubenswrapper[4937]: I0123 06:34:10.182439 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.182449 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.182469 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.182484 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.197500 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.213934 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.227007 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.238225 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.248618 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.258628 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc 
kubenswrapper[4937]: I0123 06:34:10.285687 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.285721 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.285732 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.285748 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.285759 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.388769 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.388812 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.388823 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.388841 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.388851 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.491846 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.491883 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.491893 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.491910 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.491925 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.502134 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:57:48.708696914 +0000 UTC Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.525674 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.525758 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:10 crc kubenswrapper[4937]: E0123 06:34:10.525834 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:10 crc kubenswrapper[4937]: E0123 06:34:10.525948 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.548464 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.569945 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.587529 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97
d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.595709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.595787 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.595807 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.595834 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.595853 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.601101 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.616827 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.635582 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.651012 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.667773 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.681730 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.695801 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.699025 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.699082 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.699096 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc 
kubenswrapper[4937]: I0123 06:34:10.699118 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.699132 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.706860 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.719914 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0e
c79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.736150 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.742776 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.747020 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.758340 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd1
2a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.769797 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.793351 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.802097 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.802141 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.802152 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.802170 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.802183 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.806573 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.818194 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.829451 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.844696 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.862346 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.874943 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.894524 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.904659 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.904716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.904728 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.904746 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.904760 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:10Z","lastTransitionTime":"2026-01-23T06:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.909705 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.921360 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.932542 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.943157 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc 
kubenswrapper[4937]: I0123 06:34:10.953556 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:10 crc kubenswrapper[4937]: I0123 06:34:10.972177 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff
5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:10Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.002859 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.007128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.007164 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.007177 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.007198 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.007212 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.020968 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.036922 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.051352 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.070537 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.104010 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:11Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.110145 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.110214 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.110239 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.110266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.110284 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.212915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.212952 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.212961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.212977 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.212988 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.315754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.315815 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.315829 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.315853 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.315868 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.419100 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.419142 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.419154 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.419172 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.419183 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.503323 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:17:09.198588211 +0000 UTC Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.522049 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.522137 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.522159 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.522189 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.522213 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.525357 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.525415 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:11 crc kubenswrapper[4937]: E0123 06:34:11.525585 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:11 crc kubenswrapper[4937]: E0123 06:34:11.525753 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.625056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.625109 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.625119 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.625140 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.625152 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.728822 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.728894 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.728914 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.728944 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.728966 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.832198 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.832255 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.832268 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.832288 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.832302 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.935850 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.935937 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.935965 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.935996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:11 crc kubenswrapper[4937]: I0123 06:34:11.936016 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:11Z","lastTransitionTime":"2026-01-23T06:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.004166 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/2.log" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.005522 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/1.log" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.012860 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14" exitCode=1 Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.012941 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.013067 4937 scope.go:117] "RemoveContainer" containerID="fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.014372 4937 scope.go:117] "RemoveContainer" containerID="0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14" Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.014764 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.036246 4937 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.038826 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.039021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.039066 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.039098 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.039119 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.048764 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.065100 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f7951
3347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.081019 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc 
kubenswrapper[4937]: I0123 06:34:12.096956 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.114682 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.114907 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.114933 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.115145 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:44.115100454 +0000 UTC m=+83.918867147 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.115235 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.115393 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:44.115347721 +0000 UTC m=+83.919114414 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.116096 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.134172 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.142305 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.142371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.142390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.142418 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.142437 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.150137 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.169224 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.184629 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.198626 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.216541 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.216825 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:34:44.216784943 +0000 UTC m=+84.020551636 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.216969 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.217034 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217281 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217357 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217300 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217379 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217407 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217436 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217482 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:44.217452832 +0000 UTC m=+84.021219495 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.217521 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:44.217497653 +0000 UTC m=+84.021264356 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.222114 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0e
c79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.244582 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115
d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.246426 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.246489 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.246505 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.246527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.246544 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.273280 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector 
*v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 
06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607a
b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.287873 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.304043 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.317650 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.331088 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.349686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.349732 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.349746 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.349767 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.349781 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.452399 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.452449 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.452467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.452492 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.452509 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.504018 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:49:19.105408105 +0000 UTC Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.520132 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.520228 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:34:28.520203952 +0000 UTC m=+68.323970645 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.519950 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.530486 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.531200 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.531450 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:12 crc kubenswrapper[4937]: E0123 06:34:12.531741 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.558001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.558063 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.558079 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.558106 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.558122 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.661250 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.661333 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.661360 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.661401 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.661432 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.764572 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.764646 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.764660 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.764708 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.764724 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.867723 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.867765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.867780 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.867797 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.867811 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.971432 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.971492 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.971574 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.971647 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:12 crc kubenswrapper[4937]: I0123 06:34:12.971676 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:12Z","lastTransitionTime":"2026-01-23T06:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.018366 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/2.log" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.075471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.075549 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.075569 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.075641 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.075670 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.179637 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.180455 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.180544 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.180650 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.180736 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.283508 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.283642 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.283668 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.283709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.283735 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.386731 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.387095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.387162 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.387244 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.387312 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.490182 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.490255 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.490272 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.490299 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.490319 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.504345 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:02:03.226393549 +0000 UTC Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.526092 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.526155 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:13 crc kubenswrapper[4937]: E0123 06:34:13.526256 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:13 crc kubenswrapper[4937]: E0123 06:34:13.526371 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.593363 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.593431 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.593454 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.593489 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.593515 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.697484 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.697547 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.697572 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.697632 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.697662 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.801452 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.801520 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.801535 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.801557 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.801577 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.912876 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.912961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.912985 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.913027 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:13 crc kubenswrapper[4937]: I0123 06:34:13.913052 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:13Z","lastTransitionTime":"2026-01-23T06:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.015863 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.015920 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.015933 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.015953 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.015968 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.119762 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.119837 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.119859 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.119888 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.119907 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.222992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.223064 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.223082 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.223111 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.223132 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.326127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.326215 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.326233 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.326262 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.326283 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.356696 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.356826 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.356853 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.356890 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.356915 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.380354 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:14Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.385929 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.386001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.386024 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.386054 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.386077 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.409515 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:14Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.415478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.415541 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.415557 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.415585 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.415626 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.436400 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:14Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.441068 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.441145 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.441165 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.441200 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.441222 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.461102 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:14Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.466304 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.466361 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.466382 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.466412 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.466431 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.487420 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:14Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.487680 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.489692 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.489748 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.489781 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.489817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.489841 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.505318 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:09:17.97625807 +0000 UTC Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.525934 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.526102 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.525935 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:14 crc kubenswrapper[4937]: E0123 06:34:14.526348 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.594160 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.594267 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.594298 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.594337 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.594357 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.697700 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.697742 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.697752 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.697768 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.697778 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.800311 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.800346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.800358 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.800379 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.800390 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.902867 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.902913 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.902925 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.902945 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:14 crc kubenswrapper[4937]: I0123 06:34:14.902959 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:14Z","lastTransitionTime":"2026-01-23T06:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.006517 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.006635 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.006657 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.006687 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.006711 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.110644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.110731 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.110756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.110789 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.110811 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.213737 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.213795 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.213808 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.213831 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.213845 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.316655 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.316713 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.316724 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.316743 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.316756 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.418996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.419050 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.419065 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.419088 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.419104 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.506001 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 05:59:19.169276805 +0000 UTC Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.522400 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.522459 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.522501 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.522525 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.522538 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.525827 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.525835 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:15 crc kubenswrapper[4937]: E0123 06:34:15.526039 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:15 crc kubenswrapper[4937]: E0123 06:34:15.526241 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.625364 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.625411 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.625421 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.625438 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.625450 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.728712 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.728766 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.728774 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.728793 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.728804 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.832706 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.832798 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.832822 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.832858 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.832884 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.936863 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.936933 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.936951 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.936978 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:15 crc kubenswrapper[4937]: I0123 06:34:15.936996 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:15Z","lastTransitionTime":"2026-01-23T06:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.038908 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.038954 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.038963 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.038980 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.038991 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.141462 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.141509 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.141517 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.141535 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.141549 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.245051 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.245100 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.245107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.245124 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.245135 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.348008 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.348048 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.348059 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.348080 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.348091 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.451988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.452043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.452063 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.452089 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.452108 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.507079 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:02:51.67998007 +0000 UTC Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.525535 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.525535 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:16 crc kubenswrapper[4937]: E0123 06:34:16.525785 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:16 crc kubenswrapper[4937]: E0123 06:34:16.525898 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.555205 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.555293 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.555365 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.555394 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.555420 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.658940 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.659012 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.659030 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.659056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.659076 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.762793 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.762875 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.762895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.762932 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.762954 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.866196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.866246 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.866260 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.866282 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.866296 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.970194 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.970527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.970648 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.970771 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:16 crc kubenswrapper[4937]: I0123 06:34:16.970867 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:16Z","lastTransitionTime":"2026-01-23T06:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.073783 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.074109 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.074196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.074329 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.074413 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.177565 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.177683 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.177709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.177745 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.177769 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.279919 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.279981 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.279992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.280006 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.280017 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.382877 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.382960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.382984 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.383019 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.383083 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.485779 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.485836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.485848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.485870 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.485912 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.508017 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:09:27.361636412 +0000 UTC Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.525248 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.525386 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:17 crc kubenswrapper[4937]: E0123 06:34:17.525395 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:17 crc kubenswrapper[4937]: E0123 06:34:17.525730 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.589311 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.589388 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.589406 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.589459 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.589477 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.692571 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.692656 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.692669 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.692693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.692710 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.796238 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.796283 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.796295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.796315 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.796330 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.898622 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.898682 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.898698 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.898727 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:17 crc kubenswrapper[4937]: I0123 06:34:17.898744 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:17Z","lastTransitionTime":"2026-01-23T06:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.001780 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.001853 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.001864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.001882 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.001894 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.104413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.104490 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.104510 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.104539 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.104558 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.208669 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.208720 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.208731 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.208751 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.208765 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.312326 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.312640 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.312743 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.312833 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.312939 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.415471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.415717 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.415807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.415871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.415926 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.509168 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:28:28.360846799 +0000 UTC
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.518218 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.518252 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.518264 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.518284 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.518295 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.525743 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.525744 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:34:18 crc kubenswrapper[4937]: E0123 06:34:18.526637 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:34:18 crc kubenswrapper[4937]: E0123 06:34:18.526504 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.620799 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.620844 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.620856 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.620874 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.620888 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.723123 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.723175 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.723189 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.723207 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.723221 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.830510 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.830835 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.830899 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.831026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.831105 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.934152 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.934461 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.934548 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.934671 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:18 crc kubenswrapper[4937]: I0123 06:34:18.934761 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:18Z","lastTransitionTime":"2026-01-23T06:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.037792 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.037854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.037875 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.037901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.037921 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.141185 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.141262 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.141275 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.141299 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.141315 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.244191 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.244300 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.244321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.244349 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.244370 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.347584 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.347667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.347693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.347716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.347731 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.450563 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.450631 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.450643 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.450663 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.450675 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.509586 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:41:10.286732467 +0000 UTC
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.525963 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.526045 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:34:19 crc kubenswrapper[4937]: E0123 06:34:19.526138 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 06:34:19 crc kubenswrapper[4937]: E0123 06:34:19.526403 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.553381 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.553804 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.554133 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.554254 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.554363 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.657371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.657450 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.657474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.657510 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.657535 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.763977 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.764025 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.764038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.764056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.764069 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.866937 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.867001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.867020 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.867045 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.867065 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.970685 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.970772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.970798 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.970835 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:19 crc kubenswrapper[4937]: I0123 06:34:19.970867 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:19Z","lastTransitionTime":"2026-01-23T06:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.074235 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.074299 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.074315 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.074339 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.074359 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.178401 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.178466 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.178482 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.178507 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.178526 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.281979 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.282057 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.282078 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.282109 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.282137 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.385523 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.385645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.385663 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.385693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.385710 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.488805 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.488870 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.488891 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.488917 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.488935 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.511004 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 14:14:51.038171457 +0000 UTC
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.525884 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:34:20 crc kubenswrapper[4937]: E0123 06:34:20.526108 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.526442 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:34:20 crc kubenswrapper[4937]: E0123 06:34:20.526672 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.544669 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.564886 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z"
Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.579111 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.591983 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:20 crc 
kubenswrapper[4937]: I0123 06:34:20.592026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.592035 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.592051 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.592065 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.592630 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.607368 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.619911 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.630102 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.643295 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.654430 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc 
kubenswrapper[4937]: I0123 06:34:20.671245 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31eb
ba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.691488 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/
log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a564
6fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.694131 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.694170 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.694182 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.694199 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:20 crc 
kubenswrapper[4937]: I0123 06:34:20.694210 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.705457 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.717636 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.729632 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.740872 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.752199 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.771212 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe872d66b55ddfb6d7ec05ea107973a0e9e3189e0b705946229ac9db6f473a20\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"message\\\":\\\"match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d937b3b3-82c3-4791-9a66-41b9fed53e9d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:33:54.767321 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector 
*v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 
06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607a
b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.783758 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:20Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.796586 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.796649 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.796665 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:20 crc 
kubenswrapper[4937]: I0123 06:34:20.796684 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.796697 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.899404 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.899460 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.899473 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.899495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:20 crc kubenswrapper[4937]: I0123 06:34:20.899510 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:20Z","lastTransitionTime":"2026-01-23T06:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.003171 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.003382 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.003393 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.003408 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.003418 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.105712 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.105753 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.105763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.105780 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.105791 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.208654 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.208713 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.208724 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.208749 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.208764 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.312163 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.312213 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.312224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.312246 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.312258 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.415477 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.415541 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.415559 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.415584 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.415628 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.511806 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:07:24.272065641 +0000 UTC
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.518749 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.518824 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.518878 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.518912 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.518931 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.525774 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.525851 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:34:21 crc kubenswrapper[4937]: E0123 06:34:21.525967 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:34:21 crc kubenswrapper[4937]: E0123 06:34:21.526081 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.622902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.622980 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.622998 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.623028 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.623045 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.726774 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.726852 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.726876 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.726909 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.726931 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.830571 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.830635 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.830645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.830664 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.830679 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.933678 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.933729 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.933739 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.933758 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:21 crc kubenswrapper[4937]: I0123 06:34:21.933770 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:21Z","lastTransitionTime":"2026-01-23T06:34:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.038220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.038271 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.038283 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.038303 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.038314 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.142866 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.142965 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.142993 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.143031 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.143053 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.247397 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.247446 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.247458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.247479 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.247494 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.350243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.350284 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.350295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.350312 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.350325 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.453896 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.453968 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.453990 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.454013 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.454028 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.512818 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:57:31.822007047 +0000 UTC
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.597949 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.597710 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.598234 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:34:22 crc kubenswrapper[4937]: E0123 06:34:22.598377 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 06:34:22 crc kubenswrapper[4937]: E0123 06:34:22.598511 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:34:22 crc kubenswrapper[4937]: E0123 06:34:22.598738 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.599647 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.599716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.599743 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.599777 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.599802 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.703000 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.703066 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.703087 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.703124 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.703150 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.805821 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.805879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.805892 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.805913 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.805929 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.909458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.909510 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.909523 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.909542 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:22 crc kubenswrapper[4937]: I0123 06:34:22.909556 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:22Z","lastTransitionTime":"2026-01-23T06:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.011863 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.011938 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.011963 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.012000 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.012024 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.114920 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.114992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.115010 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.115037 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.115055 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.223843 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.223913 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.223924 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.223947 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.223961 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.327249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.327322 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.327346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.327380 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.327406 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.431173 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.431247 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.431266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.431292 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.431312 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.513416 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:37:44.170231955 +0000 UTC
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.526292 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:34:23 crc kubenswrapper[4937]: E0123 06:34:23.526538 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.527410 4937 scope.go:117] "RemoveContainer" containerID="0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14"
Jan 23 06:34:23 crc kubenswrapper[4937]: E0123 06:34:23.527849 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.534445 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.534495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.534512 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.534536 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.534556 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.547404 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z"
Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.564974 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate:
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.579908 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.592197 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.609138 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.620338 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc 
kubenswrapper[4937]: I0123 06:34:23.634795 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31eb
ba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.638343 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.638386 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.638401 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.638419 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.638432 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.667253 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.688355 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\"
,\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd1
2a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.708538 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.728138 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.741521 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.741583 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.741611 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.741634 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.741646 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.744205 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.763684 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.797123 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.814383 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.834403 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.845972 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.846021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.846034 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.846056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.846074 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.852951 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.870393 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:23Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.950162 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:23 crc 
kubenswrapper[4937]: I0123 06:34:23.950249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.950270 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.950298 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:23 crc kubenswrapper[4937]: I0123 06:34:23.950318 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:23Z","lastTransitionTime":"2026-01-23T06:34:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.053950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.054024 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.054039 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.054085 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.054103 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.157731 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.157828 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.157846 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.157903 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.157920 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.261319 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.261384 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.261396 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.261417 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.261432 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.364217 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.364271 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.364280 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.364302 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.364314 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.467240 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.467308 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.467320 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.467338 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.467350 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.513712 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:25:59.858093532 +0000 UTC Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.526337 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.526419 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.526439 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.526798 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.526971 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.527147 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.570438 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.570499 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.570513 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.570535 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.570549 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.677393 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.677451 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.677465 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.677486 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.677504 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.717148 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.717192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.717225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.717243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.717254 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.731554 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.737069 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.737106 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.737115 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.737128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.737136 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.748585 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.753959 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.754024 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.754038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.754065 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.754082 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.770811 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.775363 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.775401 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.775412 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.775432 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.775446 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.788572 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.793397 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.793441 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.793455 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.793477 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.793489 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.806223 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:24Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:24 crc kubenswrapper[4937]: E0123 06:34:24.806482 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.808175 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.808210 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.808219 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.808234 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.808245 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.911099 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.911149 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.911167 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.911201 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:24 crc kubenswrapper[4937]: I0123 06:34:24.911217 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:24Z","lastTransitionTime":"2026-01-23T06:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.014850 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.014935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.014963 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.014993 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.015012 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.117997 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.118045 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.118060 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.118083 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.118106 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.221297 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.221358 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.221370 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.221391 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.221404 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.324155 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.324199 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.324209 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.324229 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.324242 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.427208 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.427255 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.427268 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.427287 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.427298 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.514451 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:56:30.702839115 +0000 UTC Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.525798 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:25 crc kubenswrapper[4937]: E0123 06:34:25.525947 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.529772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.529801 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.529813 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.529825 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.529834 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.632279 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.632326 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.632338 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.632358 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.632373 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.735750 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.735799 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.735809 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.735828 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.735839 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.838471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.838516 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.838525 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.838544 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.838556 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.941390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.941448 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.941457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.941473 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:25 crc kubenswrapper[4937]: I0123 06:34:25.941486 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:25Z","lastTransitionTime":"2026-01-23T06:34:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.044114 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.044496 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.044506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.044523 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.044564 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.147153 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.147221 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.147233 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.147253 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.147272 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.250072 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.250110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.250121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.250138 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.250149 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.353450 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.353495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.353507 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.353525 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.353538 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.457069 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.457111 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.457119 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.457136 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.457148 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.514663 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:30:34.98114212 +0000 UTC Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.526136 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:26 crc kubenswrapper[4937]: E0123 06:34:26.526295 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.526147 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.526376 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:26 crc kubenswrapper[4937]: E0123 06:34:26.526398 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:26 crc kubenswrapper[4937]: E0123 06:34:26.526731 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.559986 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.563739 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.563765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.563818 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.563839 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.667754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.667832 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.667851 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.667882 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.667904 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.769831 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.769864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.769872 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.769887 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.769899 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.872817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.872872 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.872884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.872906 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.872921 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.976317 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.976392 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.976406 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.976433 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:26 crc kubenswrapper[4937]: I0123 06:34:26.976449 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:26Z","lastTransitionTime":"2026-01-23T06:34:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.079052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.079102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.079120 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.079143 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.079161 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.181877 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.181915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.181925 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.181940 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.181949 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.284850 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.284914 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.284927 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.284943 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.284954 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.388331 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.388376 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.388394 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.388416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.388435 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.491234 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.491277 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.491286 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.491303 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.491314 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.515640 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:46:06.326437487 +0000 UTC Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.526028 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:27 crc kubenswrapper[4937]: E0123 06:34:27.526186 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.594727 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.594805 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.594894 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.594922 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.595024 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.699097 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.699149 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.699179 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.699198 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.699208 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.802584 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.802667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.802679 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.802706 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.802722 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.905327 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.905384 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.905399 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.905424 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:27 crc kubenswrapper[4937]: I0123 06:34:27.905443 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:27Z","lastTransitionTime":"2026-01-23T06:34:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.008651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.008714 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.008726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.008748 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.008763 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.112086 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.112153 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.112171 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.112199 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.112216 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.215214 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.215265 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.215279 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.215305 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.215321 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.318078 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.318122 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.318132 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.318148 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.318162 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.421468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.421520 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.421531 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.421548 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.421565 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.516089 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:58:24.984095614 +0000 UTC Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.524908 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.524971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.524982 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.525002 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.525013 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.525413 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.525627 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.525818 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:28 crc kubenswrapper[4937]: E0123 06:34:28.526013 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:28 crc kubenswrapper[4937]: E0123 06:34:28.526220 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:28 crc kubenswrapper[4937]: E0123 06:34:28.526431 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.541009 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.570454 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:28 crc kubenswrapper[4937]: E0123 06:34:28.572024 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:28 crc kubenswrapper[4937]: E0123 06:34:28.572189 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:00.572152373 +0000 UTC m=+100.375919196 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.628397 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.628465 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.628481 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.628503 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.628520 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.731331 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.731392 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.731402 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.731421 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.731433 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.834684 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.834767 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.834810 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.834834 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.834852 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.938158 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.938197 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.938206 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.938225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:28 crc kubenswrapper[4937]: I0123 06:34:28.938239 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:28Z","lastTransitionTime":"2026-01-23T06:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.041770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.041807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.041816 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.041831 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.041841 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.144333 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.144761 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.144885 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.145019 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.145115 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.248186 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.248235 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.248246 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.248264 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.248276 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.351336 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.351384 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.351395 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.351414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.351427 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.454895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.454961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.454971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.454987 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.455005 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.516618 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:08:54.475236087 +0000 UTC Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.526084 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:29 crc kubenswrapper[4937]: E0123 06:34:29.526528 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.558565 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.558794 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.558817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.558888 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.558908 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.662123 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.662171 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.662182 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.662198 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.662211 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.766304 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.766366 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.766380 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.766400 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.766415 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.869398 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.869445 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.869474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.869493 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.869505 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.972466 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.972748 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.972838 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.972915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:29 crc kubenswrapper[4937]: I0123 06:34:29.972984 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:29Z","lastTransitionTime":"2026-01-23T06:34:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.076528 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.076819 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.076898 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.076976 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.077036 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.085493 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/0.log" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.085651 4937 generic.go:334] "Generic (PLEG): container finished" podID="ddcbbc37-6ac2-41e5-a7ea-04de9284c50a" containerID="755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753" exitCode=1 Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.085752 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerDied","Data":"755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.086325 4937 scope.go:117] "RemoveContainer" containerID="755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.100314 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f7951
3347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.110552 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc 
kubenswrapper[4937]: I0123 06:34:30.124218 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.140651 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.156305 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.171543 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.179809 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.179850 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.179861 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.179881 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.179895 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.188982 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.201166 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.212454 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.229259 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.243196 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.266490 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.283415 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.283456 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.283469 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.283485 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.283496 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.284354 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.300185 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.319684 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.334441 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.347666 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.362502 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.375675 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.386544 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.386575 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.386585 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.386619 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.386630 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.489563 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.489608 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.489618 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.489633 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.489644 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.517327 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:34:09.456005848 +0000 UTC Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.525816 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:30 crc kubenswrapper[4937]: E0123 06:34:30.525942 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.526183 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:30 crc kubenswrapper[4937]: E0123 06:34:30.526367 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.526441 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:30 crc kubenswrapper[4937]: E0123 06:34:30.526482 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.540498 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc 
kubenswrapper[4937]: I0123 06:34:30.554225 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.568448 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.590141 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.595912 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.595970 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.595981 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.596000 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.596013 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.624071 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.647932 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f7951
3347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.674918 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.687475 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.699086 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.699137 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.699147 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.699168 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.699182 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.706329 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.720273 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc 
kubenswrapper[4937]: I0123 06:34:30.741157 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.766941 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.780918 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.793654 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.801901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.801932 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.801941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 
06:34:30.801956 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.801965 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.816744 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.829470 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.840796 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.852742 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.866065 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:30Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.904567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.904632 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.904644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.904663 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:30 crc kubenswrapper[4937]: I0123 06:34:30.904675 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:30Z","lastTransitionTime":"2026-01-23T06:34:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.006720 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.007080 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.007170 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.007271 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.007365 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.091107 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/0.log" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.091477 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerStarted","Data":"46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.105510 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243
b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.114366 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.114403 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.114411 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.114426 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.114437 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.116749 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.127807 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.137980 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e957
9293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.149748 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f7951
3347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.163332 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc 
kubenswrapper[4937]: I0123 06:34:31.177332 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.196981 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.210940 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.219588 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.219656 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.219667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.219684 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.219695 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.222953 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.235237 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.246401 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.257803 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.275323 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.289572 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.314231 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.321870 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.321918 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.321928 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.321944 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.321957 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.329848 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.349367 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.362147 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:31Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.424253 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.424316 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.424329 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc 
kubenswrapper[4937]: I0123 06:34:31.424348 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.424361 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.517709 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:32:23.018665218 +0000 UTC Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.525989 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:31 crc kubenswrapper[4937]: E0123 06:34:31.526174 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.526907 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.526941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.526952 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.526967 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.526979 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.630125 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.630183 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.630196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.630212 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.630222 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.732275 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.732342 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.732380 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.732400 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.732415 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.835052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.835106 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.835122 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.835141 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.835152 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.937464 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.937530 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.937539 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.937562 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:31 crc kubenswrapper[4937]: I0123 06:34:31.937573 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:31Z","lastTransitionTime":"2026-01-23T06:34:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.040688 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.040735 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.040746 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.040763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.040776 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.144107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.144155 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.144165 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.144191 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.144204 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.247751 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.247804 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.247813 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.247832 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.247845 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.350734 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.350792 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.350804 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.350826 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.350838 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.454778 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.454867 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.454881 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.454902 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.454915 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.518054 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:44:58.387012191 +0000 UTC Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.552385 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.552448 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.552415 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.552385 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:32 crc kubenswrapper[4937]: E0123 06:34:32.552569 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:32 crc kubenswrapper[4937]: E0123 06:34:32.552720 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:32 crc kubenswrapper[4937]: E0123 06:34:32.552793 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:32 crc kubenswrapper[4937]: E0123 06:34:32.552861 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.557439 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.557479 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.557495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.557512 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.557522 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.659651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.659688 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.659697 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.659718 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.659731 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.761832 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.761887 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.761898 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.761919 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.761930 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.864067 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.864103 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.864112 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.864126 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.864138 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.967391 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.967447 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.967457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.967478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:32 crc kubenswrapper[4937]: I0123 06:34:32.967497 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:32Z","lastTransitionTime":"2026-01-23T06:34:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.070639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.070716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.070736 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.070764 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.070785 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.173806 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.174178 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.174254 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.174353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.174447 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.277173 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.277224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.277234 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.277248 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.277277 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.379551 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.379879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.379897 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.379915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.379929 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.483432 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.483776 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.483840 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.483928 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.484004 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.518814 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:21:02.097762604 +0000 UTC Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.587192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.587241 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.587250 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.587266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.587276 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.698715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.699441 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.699579 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.699657 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.699679 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.803157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.803210 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.803222 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.803240 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.803251 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.906032 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.906095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.906113 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.906140 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:33 crc kubenswrapper[4937]: I0123 06:34:33.906158 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:33Z","lastTransitionTime":"2026-01-23T06:34:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.009278 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.009334 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.009346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.009364 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.009377 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.111206 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.111248 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.111257 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.111274 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.111285 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.214517 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.214563 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.214575 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.214613 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.214628 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.317715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.317762 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.317772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.317792 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.317805 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.421116 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.421200 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.421226 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.421261 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.421291 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.519837 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:34:03.530854781 +0000 UTC Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.524798 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.524852 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.524864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.524883 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.524894 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.525294 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.525376 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.525389 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:34 crc kubenswrapper[4937]: E0123 06:34:34.525502 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.525608 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:34 crc kubenswrapper[4937]: E0123 06:34:34.525674 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:34 crc kubenswrapper[4937]: E0123 06:34:34.525765 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:34 crc kubenswrapper[4937]: E0123 06:34:34.525781 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.628246 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.628351 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.628375 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.628411 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.628438 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.732205 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.732295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.732318 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.732345 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.732367 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.836062 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.836136 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.836154 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.836181 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.836202 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.933080 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.933155 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.933174 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.933202 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.933220 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: E0123 06:34:34.956688 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:34Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.961871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.961960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.962037 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.962076 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.962239 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:34 crc kubenswrapper[4937]: E0123 06:34:34.987211 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:34Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.992184 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.992233 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.992246 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.992265 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:34 crc kubenswrapper[4937]: I0123 06:34:34.992276 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:34Z","lastTransitionTime":"2026-01-23T06:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: E0123 06:34:35.007706 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:35Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.011245 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.011293 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.011318 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.011344 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.011359 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: E0123 06:34:35.029240 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:35Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.032635 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.032671 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.032682 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.032697 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.032732 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: E0123 06:34:35.047474 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:35Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:35 crc kubenswrapper[4937]: E0123 06:34:35.047610 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.049223 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.049267 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.049282 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.049302 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.049315 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.151699 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.151740 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.151749 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.151766 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.151779 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.254043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.254085 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.254098 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.254113 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.254123 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.356613 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.356678 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.356690 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.356707 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.356717 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.459899 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.459979 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.460029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.460054 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.460073 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.520578 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:31:33.173586922 +0000 UTC Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.563991 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.564350 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.564869 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.565450 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.566036 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.668829 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.668888 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.668905 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.668930 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.668948 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.779531 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.779692 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.779717 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.779930 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.780006 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.883644 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.883930 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.883986 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.884022 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.884042 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.987536 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.987859 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.988053 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.988281 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:35 crc kubenswrapper[4937]: I0123 06:34:35.988763 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:35Z","lastTransitionTime":"2026-01-23T06:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.092925 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.093013 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.093032 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.093057 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.093076 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.196790 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.196856 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.196873 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.196900 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.196919 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.300191 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.300227 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.300234 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.300249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.300259 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.404309 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.404359 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.404372 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.404412 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.404442 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.508560 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.508642 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.508653 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.508674 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.508687 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.521768 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:28:30.336867938 +0000 UTC Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.526242 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.526321 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:36 crc kubenswrapper[4937]: E0123 06:34:36.526389 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.526418 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:36 crc kubenswrapper[4937]: E0123 06:34:36.526497 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:36 crc kubenswrapper[4937]: E0123 06:34:36.526625 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.526820 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:36 crc kubenswrapper[4937]: E0123 06:34:36.527146 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.527518 4937 scope.go:117] "RemoveContainer" containerID="0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.612344 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.612789 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.613004 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.613635 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.613847 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.717168 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.717633 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.717713 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.717778 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.717856 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.822471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.822537 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.822556 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.822579 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.822610 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.926019 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.926106 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.926130 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.926163 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:36 crc kubenswrapper[4937]: I0123 06:34:36.926194 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:36Z","lastTransitionTime":"2026-01-23T06:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.034823 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.035302 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.035321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.035351 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.035370 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.115130 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/2.log" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.117563 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.118608 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.129654 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.137750 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.137777 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.137786 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.137829 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.137855 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.145036 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e5
29ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.165507 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.179726 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.208292 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.222984 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.240830 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.240915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.240942 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.240975 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.241003 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.243542 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.260217 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.277772 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.291565 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.315494 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.340007 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.343937 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.343981 4937 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.343994 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.344016 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.344029 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.364663 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.391790 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.420476 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 
06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.434939 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.447059 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.447102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.447112 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.447128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.447139 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.447494 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.462603 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.476641 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:37Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.522107 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:09:42.704106464 +0000 UTC Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.549196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.549249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.549268 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.549288 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.549302 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.652104 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.652160 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.652177 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.652191 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.652200 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.754799 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.754854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.754864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.754884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.754900 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.857474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.857516 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.857526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.857541 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.857561 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.959988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.960029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.960040 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.960056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:37 crc kubenswrapper[4937]: I0123 06:34:37.960066 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:37Z","lastTransitionTime":"2026-01-23T06:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.062826 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.062898 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.062917 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.062945 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.062964 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.123395 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/3.log" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.124366 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/2.log" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.127010 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" exitCode=1 Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.127074 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.127138 4937 scope.go:117] "RemoveContainer" containerID="0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.128024 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" Jan 23 06:34:38 crc kubenswrapper[4937]: E0123 06:34:38.128249 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.149719 4937 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.163538 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.166043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.166076 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.166088 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.166107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.166122 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.179397 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefc
f75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.193257 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.205724 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.217655 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.226850 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.242520 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.259454 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc 
kubenswrapper[4937]: I0123 06:34:38.269261 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.269309 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.269320 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.269340 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.269354 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.273556 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.292865 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.306879 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.326922 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.342254 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.354052 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.370187 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.372001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.372053 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.372071 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.372097 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.372117 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.385326 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.405726 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0234d10ae806b988ff2e7082c0419898d274e97fd4fa4ea3e217f3bcffeb4c14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739142 6575 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739188 6575 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 
06:34:10.739252 6575 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739289 6575 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 06:34:10.739375 6575 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739437 6575 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.739266 6575 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 06:34:10.740665 6575 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 06:34:10.740715 6575 factory.go:656] Stopping watch factory\\\\nI0123 06:34:10.740739 6575 ovnkube.go:599] Stopped ovnkube\\\\nI0123 06:34:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:38Z\\\",\\\"message\\\":\\\"[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, 
Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.140\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:34:37.904014 6975 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607a
b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.421318 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:38Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.475390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.475455 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.475467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc 
kubenswrapper[4937]: I0123 06:34:38.475489 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.475505 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.522666 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:41:40.610929332 +0000 UTC Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.525998 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.526060 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.526099 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:38 crc kubenswrapper[4937]: E0123 06:34:38.526208 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.526283 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:38 crc kubenswrapper[4937]: E0123 06:34:38.526443 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:38 crc kubenswrapper[4937]: E0123 06:34:38.526651 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:38 crc kubenswrapper[4937]: E0123 06:34:38.526767 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.578611 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.578669 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.578682 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.578707 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.578722 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.681388 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.681431 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.681441 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.681458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.681471 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.785353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.785426 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.785473 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.785508 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.785530 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.893321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.893393 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.893414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.893447 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.893471 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.997468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.997532 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.997546 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.997569 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:38 crc kubenswrapper[4937]: I0123 06:34:38.997587 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:38Z","lastTransitionTime":"2026-01-23T06:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.101197 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.101254 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.101263 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.101281 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.101292 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.134435 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/3.log" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.140671 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" Jan 23 06:34:39 crc kubenswrapper[4937]: E0123 06:34:39.140938 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.159360 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.175725 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.195920 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.204651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.204711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.204728 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.204753 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.204772 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.210350 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\
\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.237130 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.253415 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.266847 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.282709 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.308544 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.308691 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.308773 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.308807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.308898 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.313634 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:38Z\\\",\\\"message\\\":\\\"[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.140\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:34:37.904014 6975 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.329485 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.347668 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.365389 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.380339 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.399148 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.412330 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.412366 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.412377 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.412397 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.412409 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.417518 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e5
29ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.435199 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.449761 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.463575 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.479651 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:39Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.515235 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.515313 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.515323 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.515341 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.515355 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.523582 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:29:52.798526547 +0000 UTC Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.619140 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.619221 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.619243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.619278 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.619308 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.723177 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.723259 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.723281 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.723332 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.723349 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.826340 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.826424 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.826449 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.826482 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.826505 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.929800 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.929880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.929904 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.929928 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:39 crc kubenswrapper[4937]: I0123 06:34:39.929952 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:39Z","lastTransitionTime":"2026-01-23T06:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.034010 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.034099 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.034117 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.034146 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.034165 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.138426 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.138525 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.138543 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.138569 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.138632 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.241933 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.241987 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.242001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.242023 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.242042 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.344229 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.344301 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.344314 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.344337 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.344350 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.446992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.447037 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.447047 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.447064 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.447077 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.524565 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:34:13.393261014 +0000 UTC Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.526044 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.526245 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.526059 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.526384 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:40 crc kubenswrapper[4937]: E0123 06:34:40.526364 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:40 crc kubenswrapper[4937]: E0123 06:34:40.526553 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:40 crc kubenswrapper[4937]: E0123 06:34:40.526782 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:40 crc kubenswrapper[4937]: E0123 06:34:40.526934 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.545062 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.553589 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.553708 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.553734 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.553767 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.553791 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.565366 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.581349 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc 
kubenswrapper[4937]: I0123 06:34:40.598473 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.614300 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.633881 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.647116 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.657022 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.657077 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.657090 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.657112 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.657125 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.660701 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.677308 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.690254 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.706237 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64
786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.719752 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.741428 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.755546 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.759372 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.759434 4937 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.759454 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.759477 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.759496 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.776108 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:38Z\\\",\\\"message\\\":\\\"[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TC
P_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.140\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:34:37.904014 6975 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.791221 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.804423 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.822231 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.836842 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:40Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.861851 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.861912 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.861926 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.861954 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.861968 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.964864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.964933 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.964950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.964976 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:40 crc kubenswrapper[4937]: I0123 06:34:40.964996 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:40Z","lastTransitionTime":"2026-01-23T06:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.067128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.067171 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.067180 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.067195 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.067208 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.169988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.170030 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.170040 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.170057 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.170069 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.272952 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.273028 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.273053 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.273084 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.273112 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.376495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.376538 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.376550 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.376569 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.376581 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.479525 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.479564 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.479574 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.479608 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.479620 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.525703 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 00:22:58.389880954 +0000 UTC Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.583641 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.583744 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.583771 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.583807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.583851 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.688346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.688393 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.688403 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.688419 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.688431 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.791909 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.791960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.791971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.791986 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.791997 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.895124 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.895198 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.895220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.895249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.895270 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.998674 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.998722 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.998737 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.998754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:41 crc kubenswrapper[4937]: I0123 06:34:41.998768 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:41Z","lastTransitionTime":"2026-01-23T06:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.101655 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.101742 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.101765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.101791 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.101812 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.205243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.205312 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.205334 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.205360 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.205408 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.309183 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.309230 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.309240 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.309256 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.309268 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.412295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.413040 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.413211 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.413364 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.413510 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.516653 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.517177 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.517344 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.517501 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.517663 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.525821 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.526056 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.525921 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.525870 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:43:54.57586669 +0000 UTC Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.525870 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:42 crc kubenswrapper[4937]: E0123 06:34:42.526582 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:42 crc kubenswrapper[4937]: E0123 06:34:42.526687 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:42 crc kubenswrapper[4937]: E0123 06:34:42.526769 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:42 crc kubenswrapper[4937]: E0123 06:34:42.526893 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.620707 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.620759 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.620769 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.620787 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.620797 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.723763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.723808 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.723817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.723835 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.723845 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.827702 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.828086 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.828174 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.828263 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.828342 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.931970 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.932093 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.932110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.932131 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:42 crc kubenswrapper[4937]: I0123 06:34:42.932144 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:42Z","lastTransitionTime":"2026-01-23T06:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.035509 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.035580 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.035634 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.035667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.035691 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.138992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.139029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.139040 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.139069 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.139102 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.242938 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.242984 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.242995 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.243012 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.243024 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.346577 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.346658 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.346671 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.346695 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.346722 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.449851 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.449958 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.449984 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.450016 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.450038 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.526615 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:36:17.806676494 +0000 UTC Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.553346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.553420 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.553439 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.553468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.553489 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.657080 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.657154 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.657174 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.657202 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.657258 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.761305 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.761392 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.761414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.761446 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.761467 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.864521 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.864651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.864679 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.864711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.864736 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.968143 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.968218 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.968236 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.968271 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:43 crc kubenswrapper[4937]: I0123 06:34:43.968291 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:43Z","lastTransitionTime":"2026-01-23T06:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.071725 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.071810 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.071836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.071868 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.071894 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.173024 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.173105 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.173190 4937 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.173228 4937 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.173244 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.173229711 +0000 UTC m=+147.976996364 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.173359 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.173332504 +0000 UTC m=+147.977099337 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.175091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.175140 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.175153 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.175173 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.175183 4937 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.273966 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.274161 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.274190 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274238 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:35:48.274198212 +0000 UTC m=+148.077964905 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274323 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274339 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274355 4937 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274405 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.274396876 +0000 UTC m=+148.078163529 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274480 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274537 4937 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274554 4937 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.274670 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.274641361 +0000 UTC m=+148.078408014 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.284986 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.285066 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.285090 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.285121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.285143 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.388688 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.388745 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.388765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.388794 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.388815 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.491498 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.491586 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.491653 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.491684 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.491708 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.526240 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.526336 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.526412 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.526545 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.526565 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.526763 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.526263 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:44 crc kubenswrapper[4937]: E0123 06:34:44.526872 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.526916 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:50:01.525875999 +0000 UTC Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.595457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.595521 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.595537 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.595561 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.595581 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.698636 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.698697 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.698710 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.698732 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.698744 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.802474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.802548 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.802574 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.802637 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.802660 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.905931 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.905979 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.905991 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.906010 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:44 crc kubenswrapper[4937]: I0123 06:34:44.906022 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:44Z","lastTransitionTime":"2026-01-23T06:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.010277 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.010321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.010331 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.010349 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.010361 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.113566 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.113653 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.113665 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.113687 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.113699 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.217430 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.217488 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.217506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.217527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.217540 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.321127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.321193 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.321214 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.321240 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.321258 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.403715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.403774 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.403790 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.403814 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.403831 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: E0123 06:34:45.424859 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.430079 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.430176 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.430194 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.430218 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.430235 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: E0123 06:34:45.447728 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.453298 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.453353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.453370 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.453394 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.453416 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: E0123 06:34:45.471840 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.477571 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.477633 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.477645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.477664 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.477676 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: E0123 06:34:45.497979 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.503548 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.503608 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.503618 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.503638 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.503650 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: E0123 06:34:45.521730 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:45Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:45 crc kubenswrapper[4937]: E0123 06:34:45.521844 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.523756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.523827 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.523849 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.523880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.523900 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.527027 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:27:32.163901078 +0000 UTC Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.628508 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.628577 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.628658 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.628697 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.628723 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.731864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.731944 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.731975 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.732006 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.732030 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.834663 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.834710 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.834728 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.834754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.834775 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.938021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.938065 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.938075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.938094 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:45 crc kubenswrapper[4937]: I0123 06:34:45.938106 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:45Z","lastTransitionTime":"2026-01-23T06:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.041399 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.041500 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.041527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.041564 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.041628 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.144343 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.144962 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.144982 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.145009 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.145027 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.248943 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.249013 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.249033 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.249060 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.249080 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.352174 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.352231 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.352243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.352264 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.352315 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.455439 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.455494 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.455511 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.455536 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.455557 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.525946 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.526005 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.526038 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:46 crc kubenswrapper[4937]: E0123 06:34:46.526176 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.526311 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:46 crc kubenswrapper[4937]: E0123 06:34:46.526469 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:46 crc kubenswrapper[4937]: E0123 06:34:46.526693 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:46 crc kubenswrapper[4937]: E0123 06:34:46.526872 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.527321 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:56:06.244134638 +0000 UTC Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.558419 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.558469 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.558486 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.558506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.558523 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.661917 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.661995 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.662021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.662053 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.662092 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.765148 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.765205 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.765228 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.765256 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.765275 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.868932 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.869026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.869046 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.869075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.869096 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.973028 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.973086 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.973103 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.973127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:46 crc kubenswrapper[4937]: I0123 06:34:46.973144 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:46Z","lastTransitionTime":"2026-01-23T06:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.076686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.076768 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.076794 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.076826 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.076849 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.180518 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.180629 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.180646 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.180669 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.180686 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.284136 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.284202 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.284226 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.284260 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.284284 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.386961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.387040 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.387064 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.387088 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.387105 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.489474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.489534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.489556 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.489583 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.489642 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.528098 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:55:48.629572318 +0000 UTC Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.592788 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.592859 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.592884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.592914 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.592936 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.695697 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.695753 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.695771 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.695794 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.695813 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.798669 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.798714 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.798733 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.798757 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.798774 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.901474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.901524 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.901540 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.901564 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:47 crc kubenswrapper[4937]: I0123 06:34:47.901581 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:47Z","lastTransitionTime":"2026-01-23T06:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.005225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.005280 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.005291 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.005311 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.005325 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.108661 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.108724 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.108740 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.108763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.108780 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.211818 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.211890 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.211909 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.211937 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.211956 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.315118 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.315201 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.315225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.315258 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.315282 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.418206 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.418268 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.418286 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.418311 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.418330 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.522083 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.522473 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.522581 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.522729 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.522823 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.525503 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.525544 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:48 crc kubenswrapper[4937]: E0123 06:34:48.525728 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.525780 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.525825 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:48 crc kubenswrapper[4937]: E0123 06:34:48.525965 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:48 crc kubenswrapper[4937]: E0123 06:34:48.526045 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:48 crc kubenswrapper[4937]: E0123 06:34:48.526159 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.528569 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:47:16.572398058 +0000 UTC Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.627234 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.627304 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.627329 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.627383 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.627412 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.730341 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.730397 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.730414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.730438 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.730455 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.833091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.833145 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.833161 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.833184 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.833200 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.935856 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.936005 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.936034 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.936059 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:48 crc kubenswrapper[4937]: I0123 06:34:48.936076 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:48Z","lastTransitionTime":"2026-01-23T06:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.039645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.039735 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.039760 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.039794 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.039818 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.143173 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.143241 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.143253 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.143273 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.143292 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.248575 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.248670 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.248690 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.248715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.248738 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.351783 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.351863 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.351880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.352319 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.352368 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.456025 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.456075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.456091 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.456114 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.456132 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.529355 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:08:58.94482237 +0000 UTC Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.559711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.559768 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.559791 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.559822 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.559845 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.664305 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.664407 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.664425 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.664452 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.664471 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.767497 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.767522 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.767531 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.767547 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.767555 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.871494 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.871571 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.871632 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.871661 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.871680 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.975321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.975405 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.975430 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.975464 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:49 crc kubenswrapper[4937]: I0123 06:34:49.975488 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:49Z","lastTransitionTime":"2026-01-23T06:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.078452 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.078518 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.078534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.078557 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.078578 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.182325 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.182374 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.182386 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.182406 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.182422 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.285585 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.285689 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.285711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.285735 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.285753 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.394836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.394886 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.394905 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.394926 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.394938 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.499002 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.499056 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.499071 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.499095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.499114 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.526268 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.526339 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.526268 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.526451 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:50 crc kubenswrapper[4937]: E0123 06:34:50.526688 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:50 crc kubenswrapper[4937]: E0123 06:34:50.527114 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:50 crc kubenswrapper[4937]: E0123 06:34:50.527336 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:50 crc kubenswrapper[4937]: E0123 06:34:50.527558 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.529154 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" Jan 23 06:34:50 crc kubenswrapper[4937]: E0123 06:34:50.529526 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.529671 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:36:14.524944231 +0000 UTC Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.549572 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.569847 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.586344 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.602328 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.602760 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.602807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.602821 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.602842 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.602859 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.617308 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f79513347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.632508 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc 
kubenswrapper[4937]: I0123 06:34:50.650408 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31eb
ba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.666240 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.692869 4937 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c
50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.704941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.704968 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.704976 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.704992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.705001 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.709361 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.721447 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.738163 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.758015 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.772321 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.793660 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:38Z\\\",\\\"message\\\":\\\"[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.140\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:34:37.904014 6975 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.808432 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.808484 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.808503 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.808528 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.808520 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.808548 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.821674 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6
b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.833895 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.849700 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:50Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.911545 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.911648 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.911671 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.911699 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:50 crc kubenswrapper[4937]: I0123 06:34:50.911721 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:50Z","lastTransitionTime":"2026-01-23T06:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.015763 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.015842 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.015865 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.015894 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.015917 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.119686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.119755 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.119779 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.119810 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.119835 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.223036 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.223110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.223125 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.223154 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.223173 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.326760 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.326810 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.326820 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.326837 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.326849 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.430467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.430545 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.430567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.430635 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.430662 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.530318 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:31:09.770669718 +0000 UTC Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.533266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.533343 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.533355 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.533370 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.533381 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.637323 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.637399 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.637409 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.637424 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.637437 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.740182 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.740274 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.740292 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.740323 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.740344 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.844033 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.844105 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.844128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.844163 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.844189 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.947722 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.947771 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.947781 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.947797 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:51 crc kubenswrapper[4937]: I0123 06:34:51.947810 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:51Z","lastTransitionTime":"2026-01-23T06:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.051325 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.051414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.051440 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.051471 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.051492 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.154990 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.155043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.155057 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.155077 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.155092 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.257372 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.257442 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.257459 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.257484 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.257502 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.360761 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.360836 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.360854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.360880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.360898 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.464901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.464968 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.464989 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.465023 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.465050 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.530653 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:52 crc kubenswrapper[4937]: E0123 06:34:52.530832 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.531128 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:52 crc kubenswrapper[4937]: E0123 06:34:52.531222 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.531422 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:52 crc kubenswrapper[4937]: E0123 06:34:52.531506 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.531745 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:52 crc kubenswrapper[4937]: E0123 06:34:52.531847 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.532091 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:29:10.358976019 +0000 UTC Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.567828 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.567875 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.567892 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.567916 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.567934 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.671555 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.671625 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.671642 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.671666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.671679 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.774879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.774927 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.774936 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.774955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.774968 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.878182 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.878286 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.878308 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.878335 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.878354 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.981854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.981928 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.981945 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.981974 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:52 crc kubenswrapper[4937]: I0123 06:34:52.981993 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:52Z","lastTransitionTime":"2026-01-23T06:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.085661 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.085732 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.085756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.085793 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.085819 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.190938 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.191089 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.191121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.191158 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.191186 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.294710 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.294772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.294792 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.294817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.294835 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.397962 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.398019 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.398030 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.398053 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.398067 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.502321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.502440 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.502465 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.502498 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.502523 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.533223 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:44:30.472944755 +0000 UTC Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.606102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.606148 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.606160 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.606179 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.606193 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.710108 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.710253 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.710273 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.710301 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.710320 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.813429 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.813493 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.813512 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.813538 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.813557 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.916306 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.916385 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.916402 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.916430 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:53 crc kubenswrapper[4937]: I0123 06:34:53.916446 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:53Z","lastTransitionTime":"2026-01-23T06:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.020401 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.020466 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.020483 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.020509 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.020527 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.123929 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.123964 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.123973 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.123989 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.124001 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.227229 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.227306 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.227327 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.227353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.227373 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.330548 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.330712 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.330742 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.330854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.330887 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.434241 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.434315 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.434339 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.434370 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.434395 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.525787 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.525840 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:54 crc kubenswrapper[4937]: E0123 06:34:54.526006 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.526080 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.526086 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:54 crc kubenswrapper[4937]: E0123 06:34:54.526219 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:54 crc kubenswrapper[4937]: E0123 06:34:54.526341 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:54 crc kubenswrapper[4937]: E0123 06:34:54.526512 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.533841 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:35:49.089924894 +0000 UTC Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.537832 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.537922 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.537950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.537973 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.537992 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.641791 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.641849 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.641865 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.641893 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.641913 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.744953 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.745004 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.745020 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.745043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.745060 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.848243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.848363 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.848383 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.848409 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.848427 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.951738 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.951800 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.951817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.951846 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:54 crc kubenswrapper[4937]: I0123 06:34:54.951871 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:54Z","lastTransitionTime":"2026-01-23T06:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.056465 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.056534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.056551 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.056582 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.056638 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.160149 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.160200 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.160220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.160249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.160273 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.263662 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.263729 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.263750 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.263781 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.263803 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.366751 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.366814 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.366834 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.366859 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.366876 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.470837 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.470924 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.470949 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.470979 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.470996 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.534807 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:31:30.84458643 +0000 UTC Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.541922 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.541977 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.541996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.542022 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.542045 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: E0123 06:34:55.567052 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.573368 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.573466 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.573497 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.573534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.573560 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: E0123 06:34:55.598060 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.604864 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.604942 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.604968 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.605001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.605024 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: E0123 06:34:55.626846 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.632352 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.632418 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.632443 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.632474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.632497 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: E0123 06:34:55.652418 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.657713 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.657787 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.657814 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.657848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.657874 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: E0123 06:34:55.678260 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:34:55Z is after 2025-08-24T17:21:41Z" Jan 23 06:34:55 crc kubenswrapper[4937]: E0123 06:34:55.678519 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.681807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.681868 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.681884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.681910 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.682113 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.785819 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.785890 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.785915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.785946 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.785969 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.889943 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.889997 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.890021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.890054 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.890075 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.993048 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.993104 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.993120 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.993140 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:55 crc kubenswrapper[4937]: I0123 06:34:55.993154 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:55Z","lastTransitionTime":"2026-01-23T06:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.096609 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.096665 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.096682 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.096701 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.096717 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.199834 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.199871 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.199880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.199896 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.199906 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.302875 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.302961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.302988 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.303026 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.303053 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.406133 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.406185 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.406202 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.406226 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.406243 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.510118 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.510175 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.510194 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.510224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.510242 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.526148 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.526227 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.526291 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.526354 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:56 crc kubenswrapper[4937]: E0123 06:34:56.526829 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:56 crc kubenswrapper[4937]: E0123 06:34:56.526995 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:56 crc kubenswrapper[4937]: E0123 06:34:56.527129 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:56 crc kubenswrapper[4937]: E0123 06:34:56.527224 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.535752 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:26:41.053186818 +0000 UTC Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.613877 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.613949 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.613973 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.614004 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.614032 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.722970 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.723083 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.723110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.723166 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.723188 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.828334 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.828428 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.828455 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.828492 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.828518 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.932879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.932941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.932958 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.932987 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:56 crc kubenswrapper[4937]: I0123 06:34:56.933057 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:56Z","lastTransitionTime":"2026-01-23T06:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.036480 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.036560 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.036583 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.036664 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.036689 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.139984 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.140052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.140071 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.140101 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.140121 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.243773 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.243837 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.243854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.243879 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.243897 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.347371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.347456 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.347479 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.347511 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.347535 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.451157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.451224 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.451244 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.451272 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.451293 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.535939 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 05:51:57.860399138 +0000 UTC Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.554335 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.554390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.554425 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.554461 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.554486 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.657859 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.657895 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.657906 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.657924 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.657936 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.760965 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.761018 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.761037 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.761059 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.761075 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.864495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.864546 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.864563 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.864583 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.864663 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.968123 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.968192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.968216 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.968243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:57 crc kubenswrapper[4937]: I0123 06:34:57.968265 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:57Z","lastTransitionTime":"2026-01-23T06:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.071206 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.071276 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.071295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.071321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.071338 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.175030 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.175105 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.175130 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.175161 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.175184 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.278752 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.278807 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.278832 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.278866 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.278892 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.381919 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.381991 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.382014 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.382043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.382068 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.486131 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.486186 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.486204 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.486229 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.486248 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.526269 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.526391 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:34:58 crc kubenswrapper[4937]: E0123 06:34:58.526477 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:34:58 crc kubenswrapper[4937]: E0123 06:34:58.526748 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.526792 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:34:58 crc kubenswrapper[4937]: E0123 06:34:58.526876 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.527045 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:34:58 crc kubenswrapper[4937]: E0123 06:34:58.527405 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.537071 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:12:34.535934515 +0000 UTC Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.588815 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.588865 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.588878 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.588901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.588917 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.691402 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.691460 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.691478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.691502 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.691526 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.794930 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.794989 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.795006 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.795030 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.795048 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.898585 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.898670 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.898686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.898709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:58 crc kubenswrapper[4937]: I0123 06:34:58.898725 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:58Z","lastTransitionTime":"2026-01-23T06:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.000620 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.000674 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.000686 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.000705 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.000720 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.103643 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.103702 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.103717 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.103741 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.103763 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.207328 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.207413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.207430 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.207458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.207477 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.311344 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.311412 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.311433 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.311464 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.311485 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.415365 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.415416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.415429 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.415449 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.415461 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.519019 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.519077 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.519095 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.519121 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.519140 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.537477 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:46:22.052385439 +0000 UTC Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.626760 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.626903 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.626921 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.626956 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.626972 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.731457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.731520 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.731529 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.731549 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.731564 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.834810 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.834868 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.834882 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.834906 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.834925 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.938522 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.938622 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.938649 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.938677 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:34:59 crc kubenswrapper[4937]: I0123 06:34:59.938696 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:34:59Z","lastTransitionTime":"2026-01-23T06:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.042093 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.042243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.042264 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.042290 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.042309 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.145192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.145240 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.145249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.145266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.145278 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.247290 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.247349 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.247359 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.247378 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.247390 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.350585 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.350714 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.350735 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.350770 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.350791 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.453833 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.453921 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.453951 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.454011 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.454036 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.525546 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.525632 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.525676 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.527328 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:00 crc kubenswrapper[4937]: E0123 06:35:00.527392 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:00 crc kubenswrapper[4937]: E0123 06:35:00.527544 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:00 crc kubenswrapper[4937]: E0123 06:35:00.527853 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:00 crc kubenswrapper[4937]: E0123 06:35:00.527880 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.537733 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:45:23.31843158 +0000 UTC Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.544015 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e704e2b6-be50-4568-9c97-db210296428c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f72971b4ea11d800db77e586767034c2d68959f1f0d41c81b01c68eb75b28230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fabe69b39a21a6323711e6599dd0b81cbe10f86bf23ce238fda47dfd3ec5459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.557416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: 
I0123 06:35:00.557483 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.557506 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.557538 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.557556 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.577271 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4abea2fc-bdbd-485c-994d-3f99e452abdc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c443d358308f2212e15d7f03cbbc4e03be19e59f265cea0a4366ba863c19133c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c9624cc272c50091ded33972c09b0d2dfd66265fe7dae561ef29d02bc22281f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0e06a017bbe0ae2bb9ad3525b635620eef6d594a7b97916bb53ab26264df9df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5bc6881d8473115d68e5da835855a68133460f7dffef5829810b59d5647dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://173ed8f5ee23d55ef98a48cb3a795bfc490332f1ffee4896b7eccc697521fe9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d2770522370d6ac41c237093ba4bd1c7f7d57ecd43bbad255543e987387ec0b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ece653335d948b3fe3597d07c124bb56417046feddf917f0ba8db6008c60ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://40ade0d9b4c240fe691b5fe3fdb8a82ad5ea7cd684092f2533fb07af243b553e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.598698 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c628f3b9-6703-42ed-9df0-2b39b603b0f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"message\\\":\\\"falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 06:33:34.163718 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 06:33:34.164941 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66065184/tls.crt::/tmp/serving-cert-66065184/tls.key\\\\\\\"\\\\nI0123 06:33:40.225490 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 06:33:40.229402 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 06:33:40.229437 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 06:33:40.229489 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 06:33:40.229496 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 06:33:40.235831 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 06:33:40.235856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235861 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 06:33:40.235869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 06:33:40.235873 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 06:33:40.235876 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 06:33:40.235878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 06:33:40.236050 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 06:33:40.239130 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.613127 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba0daa33801e83f4a405d4d0609be32835bae2a666b7570ef69c8f1fc596244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.628436 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.643469 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db70fe3eecbe835d62c4746b311b0879228be5ed5298a06c3e8f71f9fdbf74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://973ab07811284978d9d7c9fa117682a163b45501
8a54b9dccf110eb76487584d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6ssf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bglvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.659904 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-js46n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d929cad-0d4c-472d-94dd-cba5d415d0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07bba53eb96ae8fffc909cbf38df77c5f92a3bc927bbff3aa06238ba05b71055\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf8q8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-js46n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.661755 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.661830 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.661848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.661880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.661904 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.669698 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:00 crc kubenswrapper[4937]: E0123 06:35:00.669875 4937 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:35:00 crc kubenswrapper[4937]: E0123 06:35:00.669935 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs podName:5f394d17-1f72-43ba-8d51-b76e56dd6849 nodeName:}" failed. No retries permitted until 2026-01-23 06:36:04.669918174 +0000 UTC m=+164.473684837 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs") pod "network-metrics-daemon-7ksbw" (UID: "5f394d17-1f72-43ba-8d51-b76e56dd6849") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.685355 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0df70988-ba4d-42b9-bd64-415fa126969d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f0948ad3fdd1706b06c51e71d86b0c750489b990d512a8c831cc9bef24b3ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d50c9aef31d187ce47f7b097af663ff906067b9d8bd23989a62f5e980e6932f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73668f051dbe262fd90135d4d31ebba309e4c01bc0fe4c19c57517dfabb250d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd2c9a5de4f51f7bb5dd7fb6deffd22d655839bec246b1c5aa36ef986610d371\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52b0ec79f328c1e057320b6d4313ff28e80837ce78a621ac4facbb46620372e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://078e3e7d5d1860a79a2d81376d421f64786961931667634241b299b7ac106cca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1713dbbaa41ff5a4c848976d2d5a4b6c03f75d153145b1cce4a7ca1e3d73c99a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdqq2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbxzj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.718562 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:38Z\\\",\\\"message\\\":\\\"[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TC
P_cluster\\\\\\\", UUID:\\\\\\\"97b6e7b0-06ca-455e-8259-06895040cb0c\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-marketplace\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.140\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0123 06:34:37.904014 6975 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:34:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78a48c54efc3b4a66a
af25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v77hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hqgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.737206 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.758991 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"960e8fed-5482-4968-95c7-d3d93ad36ae0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://756d28ec84c8944eedde49e0eff6ae5e99072adc64dd3ec2be4d6f0f00e2ab25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f47fdb9ac3672226eb5c068cf4c2a6b257a5bdb05132033db65bf2cf237d63ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0e3c35a5dbb71b99af942ba049be90010dcbb07494e82e0a333751d53f02e13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d7051240219404929a0e7a67f36c4c0c33c0aa48ea80c3af59f07976f67a8d74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T06:33:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.765255 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.765309 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.765321 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.765341 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.765355 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.779580 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.805209 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bhj54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:34:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T06:34:29Z\\\",\\\"message\\\":\\\"2026-01-23T06:33:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d\\\\n2026-01-23T06:33:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cd3c9ef5-cf2e-43fb-8dc2-6cb03990223d to /host/opt/cni/bin/\\\\n2026-01-23T06:33:44Z [verbose] multus-daemon started\\\\n2026-01-23T06:33:44Z [verbose] 
Readiness Indicator file check\\\\n2026-01-23T06:34:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T06:33:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:34:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dkphv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:41Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bhj54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.825769 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92f15e63-1eea-4fd7-b64c-ff03839367a3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60e69c73fed6cd277741acf2ebec122c6b9f0a1e412db5117d704d8fdcaf6ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f4bc5912c3f1eeb6204378ee92eadc4ccf458e1517dce238d8ff50a13972612\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd63070c6b82fcd05cd5d15b8a0db6d2a7ce5e7c1633da072414c81c14c722d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-23T06:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:20Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.843612 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7aa455b77dec2ff7cc3804792e1d2162f5c521b9b08b04764ea1a833b75db137\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.859560 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2212594dd29c0763613f0060f9395d3965e35e61b1953ba3dbf011bf619c32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://91c4ca6ea72bd9d8bef64cd8da25f0cbef0c40900072c92d08a8124b64856ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.868577 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.868671 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.868696 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.868730 4937 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.868753 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.875699 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wqqs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7387919d-1f76-4e34-9994-194a2a3c5dbb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83c123d21b043e9579293b655601226610842b13dc2b0d501637033e9113f0bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9rnv6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wqqs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.893854 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61037bae-85c4-470e-896a-24431192c708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80146e6f9ddd8ea497a1f51719bc8aa2d445ccd865882179b4bd7c7d8f074ae4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30ad84e97d1f580d803319fdec883920f7951
3347adeb9c5df1a4cfa0c3e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T06:33:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrbff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5fcc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.907993 4937 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f394d17-1f72-43ba-8d51-b76e56dd6849\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T06:33:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k294p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T06:33:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7ksbw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:00Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:00 crc 
kubenswrapper[4937]: I0123 06:35:00.971394 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.971425 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.971434 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.971448 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:00 crc kubenswrapper[4937]: I0123 06:35:00.971459 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:00Z","lastTransitionTime":"2026-01-23T06:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.074029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.074110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.074130 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.074157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.074176 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.176706 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.176756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.176768 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.176789 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.176806 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.280422 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.280495 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.280513 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.280547 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.280564 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.383983 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.384058 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.384083 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.384109 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.384127 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.486825 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.486876 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.486892 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.486914 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.486931 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.538913 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:20:30.327416559 +0000 UTC Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.590245 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.590288 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.590305 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.590327 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.590345 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.693235 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.693289 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.693306 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.693330 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.693347 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.796046 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.796111 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.796134 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.796165 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.796186 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.899412 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.899502 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.899532 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.899567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:01 crc kubenswrapper[4937]: I0123 06:35:01.899638 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:01Z","lastTransitionTime":"2026-01-23T06:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.003143 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.003197 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.003213 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.003239 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.003256 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.107914 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.107981 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.107998 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.108029 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.108050 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.212123 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.212573 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.212775 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.212950 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.213112 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.316010 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.316089 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.316117 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.316151 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.316177 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.419346 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.419413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.419432 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.419458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.419483 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.522990 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.523031 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.523043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.523061 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.523092 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.551456 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:07:37.209897276 +0000 UTC
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.551811 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:35:02 crc kubenswrapper[4937]: E0123 06:35:02.552009 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.552360 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:35:02 crc kubenswrapper[4937]: E0123 06:35:02.552490 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.552928 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.552950 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:02 crc kubenswrapper[4937]: E0123 06:35:02.553157 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:35:02 crc kubenswrapper[4937]: E0123 06:35:02.553214 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.625923 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.625960 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.625968 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.625982 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.625993 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.730175 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.730363 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.730397 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.730425 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.730446 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.833427 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.833493 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.833514 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.833537 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.833560 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.937178 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.937244 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.937263 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.937290 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:02 crc kubenswrapper[4937]: I0123 06:35:02.937314 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:02Z","lastTransitionTime":"2026-01-23T06:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.040444 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.040516 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.040530 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.040553 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.040568 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.144478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.144564 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.144583 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.144645 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.144674 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.258420 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.258480 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.258494 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.258534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.258549 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.362071 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.362128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.362143 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.362164 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.362182 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.465676 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.465750 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.465777 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.465809 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.465832 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.552543 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:02:27.987196801 +0000 UTC
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.569161 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.569222 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.569238 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.569266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.569284 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.672651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.672745 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.672758 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.672779 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.672793 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.776567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.776688 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.776715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.776752 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.776801 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.880489 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.880554 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.880567 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.880613 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.880630 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.983886 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.983974 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.983996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.984024 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:03 crc kubenswrapper[4937]: I0123 06:35:03.984043 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:03Z","lastTransitionTime":"2026-01-23T06:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.087661 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.087740 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.087759 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.087789 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.087809 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.191708 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.191828 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.191849 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.191884 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.191902 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.295079 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.295163 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.295192 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.295225 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.295249 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.400187 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.400283 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.400309 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.400352 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.400378 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.503045 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.503112 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.503128 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.503153 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.503171 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.525667 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.525753 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:35:04 crc kubenswrapper[4937]: E0123 06:35:04.525791 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.525671 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.525823 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:35:04 crc kubenswrapper[4937]: E0123 06:35:04.526116 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849"
Jan 23 06:35:04 crc kubenswrapper[4937]: E0123 06:35:04.527225 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 06:35:04 crc kubenswrapper[4937]: E0123 06:35:04.527352 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.527597 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" Jan 23 06:35:04 crc kubenswrapper[4937]: E0123 06:35:04.528146 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.553639 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:55:54.41273519 +0000 UTC Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.606982 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.607052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.607077 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.607107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.607132 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.710758 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.710828 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.710841 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.710867 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.710887 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.814233 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.814292 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.814310 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.814338 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.814361 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.916848 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.916904 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.916915 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.916932 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:04 crc kubenswrapper[4937]: I0123 06:35:04.916943 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:04Z","lastTransitionTime":"2026-01-23T06:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.019478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.019527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.019540 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.019557 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.019568 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.122354 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.122458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.122491 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.122523 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.122549 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.226350 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.226458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.226476 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.226504 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.226526 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.329955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.330055 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.330080 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.330115 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.330142 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.435897 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.435969 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.435986 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.436015 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.436034 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.539233 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.539310 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.539329 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.539361 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.539381 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.553720 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 09:39:44.197881574 +0000 UTC Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.642278 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.642339 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.642356 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.642383 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.642409 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.745560 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.745690 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.745715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.745746 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.745767 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.849484 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.849553 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.849574 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.849643 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.849672 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.958464 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.958566 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.958636 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.958680 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:05 crc kubenswrapper[4937]: I0123 06:35:05.958719 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:05Z","lastTransitionTime":"2026-01-23T06:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.032055 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.032467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.032561 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.032693 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.032781 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.058287 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:06Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.067150 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.067218 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.067237 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.067308 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.067327 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.095286 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:06Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.100886 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.100941 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.100954 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.100975 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.100992 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.119296 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:06Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.124470 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.124540 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.124557 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.124584 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.124626 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.143123 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:06Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.147390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.147438 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.147448 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.147467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.147483 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.160214 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T06:35:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a1babb72-84ba-4ca7-966b-7f641e51838d\\\",\\\"systemUUID\\\":\\\"c6f09717-13e3-4c26-b541-e217196b2ab6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T06:35:06Z is after 2025-08-24T17:21:41Z" Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.160356 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.162434 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.162474 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.162490 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.162509 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.162521 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.265679 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.265736 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.265748 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.265769 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.265783 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.369205 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.369258 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.369269 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.369291 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.369302 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.473075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.473150 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.473164 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.473190 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.473209 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.525790 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.526041 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.526095 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.526110 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.526371 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.526565 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.526788 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:06 crc kubenswrapper[4937]: E0123 06:35:06.526909 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.553877 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:13:26.758099712 +0000 UTC Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.576110 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.576180 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.576204 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.576228 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.576244 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.679372 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.679423 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.679436 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.679457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.679471 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.781719 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.781764 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.781772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.781790 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.781801 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.885580 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.885676 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.885689 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.885711 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.885723 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.989527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.989610 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.989621 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.989639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:06 crc kubenswrapper[4937]: I0123 06:35:06.989650 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:06Z","lastTransitionTime":"2026-01-23T06:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.092507 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.092583 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.092636 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.092665 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.092683 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.195789 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.195861 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.195880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.195906 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.195925 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.299052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.299096 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.299107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.299123 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.299134 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.401866 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.401900 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.401911 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.401924 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.401933 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.505517 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.505648 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.505676 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.505715 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.505741 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.555027 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:27:18.169513495 +0000 UTC Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.608398 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.608476 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.608500 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.608549 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.608575 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.711810 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.711878 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.711903 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.711935 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.711959 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.814901 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.814956 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.814969 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.814992 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.815024 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.919854 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.919956 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.919974 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.920001 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:07 crc kubenswrapper[4937]: I0123 06:35:07.920020 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:07Z","lastTransitionTime":"2026-01-23T06:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.022375 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.022427 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.022439 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.022457 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.022471 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.126299 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.126367 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.126381 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.126403 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.126417 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.229265 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.229403 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.229429 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.229456 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.229474 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.332249 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.332297 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.332307 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.332329 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.332340 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.436876 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.437043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.437069 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.437137 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.437166 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.525823 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.525953 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.525990 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.525952 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:08 crc kubenswrapper[4937]: E0123 06:35:08.526016 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:08 crc kubenswrapper[4937]: E0123 06:35:08.526398 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:08 crc kubenswrapper[4937]: E0123 06:35:08.526379 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:08 crc kubenswrapper[4937]: E0123 06:35:08.526774 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.539754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.539913 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.539938 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.539961 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.539980 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.555251 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:02:41.892870905 +0000 UTC Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.642955 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.643022 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.643045 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.643078 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.643105 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.746246 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.746314 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.746324 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.746345 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.746389 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.850002 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.850053 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.850067 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.850093 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.850114 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.954025 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.954106 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.954127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.954154 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:08 crc kubenswrapper[4937]: I0123 06:35:08.954171 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:08Z","lastTransitionTime":"2026-01-23T06:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.057773 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.057838 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.057856 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.057880 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.057898 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.161135 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.161208 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.161226 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.161253 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.161275 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.263919 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.263982 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.264000 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.264025 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.264044 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.367216 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.367295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.367313 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.367340 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.367359 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.471187 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.471266 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.471291 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.471325 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.471349 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.556308 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:30:07.419049551 +0000 UTC Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.574374 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.574458 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.574475 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.574504 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.574523 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.678738 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.678823 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.678857 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.678892 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.678922 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.782624 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.782688 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.782708 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.782736 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.782759 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.886007 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.886075 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.886100 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.886131 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.886157 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.989969 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.990038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.990057 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.990081 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:09 crc kubenswrapper[4937]: I0123 06:35:09.990097 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:09Z","lastTransitionTime":"2026-01-23T06:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.092987 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.093062 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.093086 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.093117 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.093140 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.197147 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.197216 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.197233 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.197261 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.197280 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.300661 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.300719 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.300730 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.300752 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.300765 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.404667 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.404728 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.404743 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.404767 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.404784 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.508449 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.508511 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.508528 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.508553 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.508572 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.525278 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:10 crc kubenswrapper[4937]: E0123 06:35:10.525470 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.525483 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.525835 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.526000 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:10 crc kubenswrapper[4937]: E0123 06:35:10.526101 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:10 crc kubenswrapper[4937]: E0123 06:35:10.526451 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:10 crc kubenswrapper[4937]: E0123 06:35:10.527770 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.556673 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:56:13.427050546 +0000 UTC Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.585400 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.585359306 podStartE2EDuration="1m30.585359306s" podCreationTimestamp="2026-01-23 06:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.565270879 +0000 UTC m=+110.369037552" watchObservedRunningTime="2026-01-23 06:35:10.585359306 +0000 UTC m=+110.389126049" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.612162 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.612196 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.612209 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 
06:35:10.612220 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.612230 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.640102 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wqqs8" podStartSLOduration=89.640074151 podStartE2EDuration="1m29.640074151s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.62414776 +0000 UTC m=+110.427914443" watchObservedRunningTime="2026-01-23 06:35:10.640074151 +0000 UTC m=+110.443840824" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.661869 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5fcc" podStartSLOduration=89.661835723 podStartE2EDuration="1m29.661835723s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.640485032 +0000 UTC m=+110.444251705" watchObservedRunningTime="2026-01-23 06:35:10.661835723 +0000 UTC m=+110.465602416" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.702792 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-js46n" podStartSLOduration=89.702755277 
podStartE2EDuration="1m29.702755277s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.675284076 +0000 UTC m=+110.479050759" watchObservedRunningTime="2026-01-23 06:35:10.702755277 +0000 UTC m=+110.506521970" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.715158 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.715205 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.715215 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.715239 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.715252 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.725132 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dbxzj" podStartSLOduration=89.725113136 podStartE2EDuration="1m29.725113136s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.703161909 +0000 UTC m=+110.506928572" watchObservedRunningTime="2026-01-23 06:35:10.725113136 +0000 UTC m=+110.528879829" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.725510 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=42.725498527 podStartE2EDuration="42.725498527s" podCreationTimestamp="2026-01-23 06:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.724789697 +0000 UTC m=+110.528556360" watchObservedRunningTime="2026-01-23 06:35:10.725498527 +0000 UTC m=+110.529265220" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.756247 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=92.756211778 podStartE2EDuration="1m32.756211778s" podCreationTimestamp="2026-01-23 06:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.755623951 +0000 UTC m=+110.559390644" watchObservedRunningTime="2026-01-23 06:35:10.756211778 +0000 UTC m=+110.559978471" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.791843 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.791822224 
podStartE2EDuration="1m30.791822224s" podCreationTimestamp="2026-01-23 06:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.775667286 +0000 UTC m=+110.579433929" watchObservedRunningTime="2026-01-23 06:35:10.791822224 +0000 UTC m=+110.595588867" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.817446 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.817494 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.817510 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.817526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.817538 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.822297 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podStartSLOduration=89.822273617 podStartE2EDuration="1m29.822273617s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.820723824 +0000 UTC m=+110.624490487" watchObservedRunningTime="2026-01-23 06:35:10.822273617 +0000 UTC m=+110.626040280" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.877386 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.877358113 podStartE2EDuration="1m0.877358113s" podCreationTimestamp="2026-01-23 06:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.876238781 +0000 UTC m=+110.680005434" watchObservedRunningTime="2026-01-23 06:35:10.877358113 +0000 UTC m=+110.681124766" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.910297 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bhj54" podStartSLOduration=89.910274465 podStartE2EDuration="1m29.910274465s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:10.909303827 +0000 UTC m=+110.713070470" watchObservedRunningTime="2026-01-23 06:35:10.910274465 +0000 UTC m=+110.714041108" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.920292 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:10 crc kubenswrapper[4937]: 
I0123 06:35:10.920467 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.920566 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.920666 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:10 crc kubenswrapper[4937]: I0123 06:35:10.920739 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:10Z","lastTransitionTime":"2026-01-23T06:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.023723 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.023765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.023774 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.023790 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.023800 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.127534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.127639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.127660 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.127689 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.127712 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.234332 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.234395 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.234414 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.234441 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.234462 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.338459 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.338516 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.338534 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.338558 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.338575 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.442709 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.443365 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.443384 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.443413 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.443434 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.546304 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.546379 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.546404 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.546435 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.546462 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.557921 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:01:15.159087613 +0000 UTC Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.650712 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.650757 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.650766 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.650781 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.650793 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.753873 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.753924 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.753934 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.753954 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.753967 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.857445 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.857510 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.857527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.857553 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.857572 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.960656 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.960718 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.960734 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.960756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:11 crc kubenswrapper[4937]: I0123 06:35:11.960776 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:11Z","lastTransitionTime":"2026-01-23T06:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.064390 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.064468 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.064493 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.064526 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.064554 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.167180 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.167248 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.167270 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.167300 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.167322 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.269255 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.269295 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.269305 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.269318 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.269327 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.372312 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.372377 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.372389 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.372411 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.372423 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.475478 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.475571 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.475632 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.475668 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.475701 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.526253 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.526327 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.526357 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:12 crc kubenswrapper[4937]: E0123 06:35:12.526487 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:12 crc kubenswrapper[4937]: E0123 06:35:12.526611 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:12 crc kubenswrapper[4937]: E0123 06:35:12.526881 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.527277 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:12 crc kubenswrapper[4937]: E0123 06:35:12.527499 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.558523 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:26:53.717880194 +0000 UTC Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.578783 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.578835 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.578847 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.578870 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.578899 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.683261 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.683336 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.683361 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.683393 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.683415 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.786971 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.787090 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.787168 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.787208 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.787283 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.890037 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.890097 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.890113 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.890138 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.890158 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.993157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.993230 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.993263 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.993294 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:12 crc kubenswrapper[4937]: I0123 06:35:12.993313 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:12Z","lastTransitionTime":"2026-01-23T06:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.097020 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.097078 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.097093 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.097113 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.097156 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.200444 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.200509 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.200527 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.200556 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.200646 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.303302 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.303353 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.303368 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.303389 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.303403 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.406650 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.406716 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.406737 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.406765 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.406784 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.510725 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.510788 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.510800 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.510822 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.510835 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.559102 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:22:15.32119369 +0000 UTC Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.614743 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.614802 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.614820 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.614845 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.614862 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.718189 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.718262 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.718281 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.718315 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.718337 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.821521 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.821678 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.821704 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.821744 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.821771 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.925728 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.925799 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.925817 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.925843 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:13 crc kubenswrapper[4937]: I0123 06:35:13.925862 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:13Z","lastTransitionTime":"2026-01-23T06:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.030012 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.030084 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.030103 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.030132 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.030152 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.133340 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.133406 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.133490 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.133516 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.133535 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.237096 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.237772 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.237795 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.237823 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.237836 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.341561 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.341668 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.341691 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.341718 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.341739 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.444976 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.445052 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.445070 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.445102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.445126 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.525442 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.525520 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.525625 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:35:14 crc kubenswrapper[4937]: E0123 06:35:14.525779 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.525804 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:35:14 crc kubenswrapper[4937]: E0123 06:35:14.526060 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 06:35:14 crc kubenswrapper[4937]: E0123 06:35:14.526114 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849"
Jan 23 06:35:14 crc kubenswrapper[4937]: E0123 06:35:14.526228 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.548639 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.548685 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.548704 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.548726 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.548745 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.559819 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:24:52.746938771 +0000 UTC
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.652368 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.652432 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.652446 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.652473 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.652490 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.756940 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.757021 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.757038 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.757063 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.757083 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.884701 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.884752 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.884791 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.884809 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.884822 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.989107 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.989157 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.989169 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.989187 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:14 crc kubenswrapper[4937]: I0123 06:35:14.989198 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:14Z","lastTransitionTime":"2026-01-23T06:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.093047 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.093102 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.093111 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.093127 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.093140 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.196662 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.196714 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.196728 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.196754 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.196778 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.300405 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.300983 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.300996 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.301017 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.301034 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.404234 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.404300 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.404312 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.404337 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.404352 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.508043 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.508124 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.508138 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.508163 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.508198 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.560969 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:07:17.088180511 +0000 UTC
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.611281 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.611337 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.611351 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.611370 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.611384 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.714371 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.714416 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.714426 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.714442 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.714452 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.818023 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.818103 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.818122 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.818152 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.818171 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.921483 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.921544 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.921582 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.921628 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:15 crc kubenswrapper[4937]: I0123 06:35:15.921643 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:15Z","lastTransitionTime":"2026-01-23T06:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.025319 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.025375 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.025387 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.025410 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.025426 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:16Z","lastTransitionTime":"2026-01-23T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.128651 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.128721 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.128737 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.128756 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.128791 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:16Z","lastTransitionTime":"2026-01-23T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.232199 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.232243 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.232253 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.232267 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.232279 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:16Z","lastTransitionTime":"2026-01-23T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.283386 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/1.log"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.284538 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/0.log"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.284652 4937 generic.go:334] "Generic (PLEG): container finished" podID="ddcbbc37-6ac2-41e5-a7ea-04de9284c50a" containerID="46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed" exitCode=1
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.284707 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerDied","Data":"46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed"}
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.284765 4937 scope.go:117] "RemoveContainer" containerID="755b4d577f573b6d4a4433332d245354860ce7571c3fde18c2a0b8a870b42753"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.285478 4937 scope.go:117] "RemoveContainer" containerID="46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed"
Jan 23 06:35:16 crc kubenswrapper[4937]: E0123 06:35:16.285906 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-bhj54_openshift-multus(ddcbbc37-6ac2-41e5-a7ea-04de9284c50a)\"" pod="openshift-multus/multus-bhj54" podUID="ddcbbc37-6ac2-41e5-a7ea-04de9284c50a"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.304381 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.304435 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.304447 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.304472 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.304489 4937 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T06:35:16Z","lastTransitionTime":"2026-01-23T06:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.367788 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"]
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.368493 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.372064 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.372246 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.374126 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.374204 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.465717 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71147807-1752-478c-9163-9203007b71eb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.465869 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/71147807-1752-478c-9163-9203007b71eb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.465963 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/71147807-1752-478c-9163-9203007b71eb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.466013 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71147807-1752-478c-9163-9203007b71eb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.466167 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71147807-1752-478c-9163-9203007b71eb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.526221 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.526357 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.526359 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:16 crc kubenswrapper[4937]: E0123 06:35:16.526541 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.526678 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:16 crc kubenswrapper[4937]: E0123 06:35:16.526888 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:16 crc kubenswrapper[4937]: E0123 06:35:16.527104 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:16 crc kubenswrapper[4937]: E0123 06:35:16.527317 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.561673 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:36:33.447990062 +0000 UTC Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.561763 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.567842 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71147807-1752-478c-9163-9203007b71eb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.567952 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/71147807-1752-478c-9163-9203007b71eb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.568053 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/71147807-1752-478c-9163-9203007b71eb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.568115 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71147807-1752-478c-9163-9203007b71eb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.568191 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71147807-1752-478c-9163-9203007b71eb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.568221 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/71147807-1752-478c-9163-9203007b71eb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.568324 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/71147807-1752-478c-9163-9203007b71eb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.569923 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71147807-1752-478c-9163-9203007b71eb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.575036 4937 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.575647 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71147807-1752-478c-9163-9203007b71eb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.591475 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71147807-1752-478c-9163-9203007b71eb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xvc69\" (UID: \"71147807-1752-478c-9163-9203007b71eb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:16 crc kubenswrapper[4937]: I0123 06:35:16.682740 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" Jan 23 06:35:17 crc kubenswrapper[4937]: I0123 06:35:17.291627 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" event={"ID":"71147807-1752-478c-9163-9203007b71eb","Type":"ContainerStarted","Data":"e4bad8c47a548cd7e760deb3b7da56d0669cd7b15b4afa559657d8094e568e84"} Jan 23 06:35:17 crc kubenswrapper[4937]: I0123 06:35:17.291709 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" event={"ID":"71147807-1752-478c-9163-9203007b71eb","Type":"ContainerStarted","Data":"289765caab3b2d7f374ed8b661b52b7cc99d6efe300f742882592bece6ca3a88"} Jan 23 06:35:17 crc kubenswrapper[4937]: I0123 06:35:17.293891 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/1.log" Jan 23 06:35:17 crc kubenswrapper[4937]: I0123 06:35:17.315837 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xvc69" podStartSLOduration=96.315803037 podStartE2EDuration="1m36.315803037s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:17.315427147 +0000 UTC m=+117.119193840" watchObservedRunningTime="2026-01-23 06:35:17.315803037 +0000 UTC m=+117.119569730" Jan 23 06:35:17 crc kubenswrapper[4937]: I0123 06:35:17.527265 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" Jan 23 06:35:17 crc kubenswrapper[4937]: E0123 06:35:17.527564 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s 
restarting failed container=ovnkube-controller pod=ovnkube-node-hqgs9_openshift-ovn-kubernetes(8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" Jan 23 06:35:18 crc kubenswrapper[4937]: I0123 06:35:18.525382 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:18 crc kubenswrapper[4937]: I0123 06:35:18.525481 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:18 crc kubenswrapper[4937]: I0123 06:35:18.525653 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:18 crc kubenswrapper[4937]: E0123 06:35:18.525650 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:18 crc kubenswrapper[4937]: E0123 06:35:18.525853 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:18 crc kubenswrapper[4937]: E0123 06:35:18.526007 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:18 crc kubenswrapper[4937]: I0123 06:35:18.526917 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:18 crc kubenswrapper[4937]: E0123 06:35:18.527064 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:20 crc kubenswrapper[4937]: E0123 06:35:20.476090 4937 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 06:35:20 crc kubenswrapper[4937]: I0123 06:35:20.525677 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:20 crc kubenswrapper[4937]: I0123 06:35:20.525814 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:20 crc kubenswrapper[4937]: I0123 06:35:20.525837 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:20 crc kubenswrapper[4937]: E0123 06:35:20.527882 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:20 crc kubenswrapper[4937]: I0123 06:35:20.527906 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:20 crc kubenswrapper[4937]: E0123 06:35:20.528042 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:20 crc kubenswrapper[4937]: E0123 06:35:20.528261 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:20 crc kubenswrapper[4937]: E0123 06:35:20.528353 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:20 crc kubenswrapper[4937]: E0123 06:35:20.673469 4937 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 06:35:22 crc kubenswrapper[4937]: I0123 06:35:22.525478 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:22 crc kubenswrapper[4937]: I0123 06:35:22.525536 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:22 crc kubenswrapper[4937]: E0123 06:35:22.525724 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:22 crc kubenswrapper[4937]: I0123 06:35:22.525778 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:22 crc kubenswrapper[4937]: E0123 06:35:22.525886 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:22 crc kubenswrapper[4937]: E0123 06:35:22.525996 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:22 crc kubenswrapper[4937]: I0123 06:35:22.526325 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:22 crc kubenswrapper[4937]: E0123 06:35:22.526462 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:24 crc kubenswrapper[4937]: I0123 06:35:24.525474 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:24 crc kubenswrapper[4937]: I0123 06:35:24.525496 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:24 crc kubenswrapper[4937]: E0123 06:35:24.525755 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:24 crc kubenswrapper[4937]: I0123 06:35:24.525494 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:24 crc kubenswrapper[4937]: E0123 06:35:24.525877 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:24 crc kubenswrapper[4937]: E0123 06:35:24.526183 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:24 crc kubenswrapper[4937]: I0123 06:35:24.526403 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:24 crc kubenswrapper[4937]: E0123 06:35:24.526548 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:25 crc kubenswrapper[4937]: E0123 06:35:25.675691 4937 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 06:35:26 crc kubenswrapper[4937]: I0123 06:35:26.525869 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:26 crc kubenswrapper[4937]: I0123 06:35:26.525950 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:26 crc kubenswrapper[4937]: E0123 06:35:26.526125 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:26 crc kubenswrapper[4937]: I0123 06:35:26.525890 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:26 crc kubenswrapper[4937]: I0123 06:35:26.526242 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:26 crc kubenswrapper[4937]: E0123 06:35:26.526309 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:26 crc kubenswrapper[4937]: E0123 06:35:26.526451 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:26 crc kubenswrapper[4937]: E0123 06:35:26.526545 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:28 crc kubenswrapper[4937]: I0123 06:35:28.525868 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:28 crc kubenswrapper[4937]: I0123 06:35:28.525981 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:28 crc kubenswrapper[4937]: I0123 06:35:28.526051 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:28 crc kubenswrapper[4937]: I0123 06:35:28.526072 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:28 crc kubenswrapper[4937]: I0123 06:35:28.526218 4937 scope.go:117] "RemoveContainer" containerID="46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed" Jan 23 06:35:28 crc kubenswrapper[4937]: E0123 06:35:28.526207 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:28 crc kubenswrapper[4937]: E0123 06:35:28.526524 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:28 crc kubenswrapper[4937]: E0123 06:35:28.527110 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:28 crc kubenswrapper[4937]: E0123 06:35:28.527347 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:28 crc kubenswrapper[4937]: I0123 06:35:28.527537 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41" Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.349303 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/3.log" Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.353015 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerStarted","Data":"5853272001fb9ba14897e9ac001b2ecb67428fb7e562c2303a245dacb8133b9f"} Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.353451 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:35:29 crc 
kubenswrapper[4937]: I0123 06:35:29.355978 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/1.log" Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.356038 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerStarted","Data":"a16dcccd6ca28bd3f08d1217478481827c25c3bc2e24679aa2d2ce901794a61b"} Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.387073 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podStartSLOduration=108.387049926 podStartE2EDuration="1m48.387049926s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:29.386051088 +0000 UTC m=+129.189817831" watchObservedRunningTime="2026-01-23 06:35:29.387049926 +0000 UTC m=+129.190816569" Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.555434 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7ksbw"] Jan 23 06:35:29 crc kubenswrapper[4937]: I0123 06:35:29.555566 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:29 crc kubenswrapper[4937]: E0123 06:35:29.555694 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:30 crc kubenswrapper[4937]: I0123 06:35:30.526069 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:30 crc kubenswrapper[4937]: I0123 06:35:30.526172 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:30 crc kubenswrapper[4937]: E0123 06:35:30.528416 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:30 crc kubenswrapper[4937]: I0123 06:35:30.528468 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:30 crc kubenswrapper[4937]: E0123 06:35:30.528758 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:30 crc kubenswrapper[4937]: E0123 06:35:30.528893 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:30 crc kubenswrapper[4937]: E0123 06:35:30.676850 4937 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 06:35:31 crc kubenswrapper[4937]: I0123 06:35:31.526261 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:31 crc kubenswrapper[4937]: E0123 06:35:31.527014 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:32 crc kubenswrapper[4937]: I0123 06:35:32.530045 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:32 crc kubenswrapper[4937]: E0123 06:35:32.530207 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:32 crc kubenswrapper[4937]: I0123 06:35:32.530470 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:32 crc kubenswrapper[4937]: E0123 06:35:32.530555 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:32 crc kubenswrapper[4937]: I0123 06:35:32.530784 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:32 crc kubenswrapper[4937]: E0123 06:35:32.530872 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:33 crc kubenswrapper[4937]: I0123 06:35:33.526025 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:33 crc kubenswrapper[4937]: E0123 06:35:33.526209 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:34 crc kubenswrapper[4937]: I0123 06:35:34.526190 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:34 crc kubenswrapper[4937]: I0123 06:35:34.526301 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:34 crc kubenswrapper[4937]: E0123 06:35:34.526474 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 06:35:34 crc kubenswrapper[4937]: I0123 06:35:34.526512 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:34 crc kubenswrapper[4937]: E0123 06:35:34.526724 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 06:35:34 crc kubenswrapper[4937]: E0123 06:35:34.526934 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 06:35:35 crc kubenswrapper[4937]: I0123 06:35:35.526003 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:35:35 crc kubenswrapper[4937]: E0123 06:35:35.526794 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7ksbw" podUID="5f394d17-1f72-43ba-8d51-b76e56dd6849" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.525799 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.525826 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.526028 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.530724 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.530860 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.532092 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.533326 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.821411 4937 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.881746 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.882451 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.883134 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.883866 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.887879 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g9qqj"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.888788 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c9fhv"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.889027 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.891161 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.892559 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893287 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893510 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893508 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893708 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 
06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893788 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893876 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.893985 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.894090 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.894323 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.894528 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.894831 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.896675 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.897511 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.899440 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.899744 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.900007 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.900224 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.901502 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.901780 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.902377 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.913775 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.914404 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.914759 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.914779 4937 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.914981 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.915080 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.915110 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6g59k"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.915261 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.915420 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.915847 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6g59k" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.916489 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rsth7"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.916886 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.917292 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.919048 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.922444 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.924812 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.925252 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.925495 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.926054 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.941834 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.950233 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.960156 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.960508 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.960739 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.960998 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.961293 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.962160 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.962511 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.963043 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.963259 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.963915 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.964403 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.966038 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6n269"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.966721 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.973267 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.973267 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974033 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974045 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974280 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974350 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974222 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974531 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 06:35:36 crc 
kubenswrapper[4937]: I0123 06:35:36.974736 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.974929 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.975128 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.975807 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.976132 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vn8kx"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.976714 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.976835 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.976922 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977078 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977168 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977240 4937 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977493 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977554 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977638 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977711 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977833 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977847 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977938 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977951 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.977959 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978116 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 06:35:36 crc 
kubenswrapper[4937]: I0123 06:35:36.978218 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978314 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978415 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978509 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978573 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978617 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978680 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978826 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978872 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978877 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.978945 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 06:35:36 crc 
kubenswrapper[4937]: I0123 06:35:36.979082 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.979098 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.979731 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.980371 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.983807 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.984450 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-62pqr"] Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.984737 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.984969 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.985113 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.985792 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.986382 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.986437 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.986436 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.986506 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.986655 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.987508 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.990984 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 06:35:36 crc kubenswrapper[4937]: I0123 06:35:36.994730 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 
06:35:37.018561 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.021331 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.021334 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.021816 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.022255 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lsjwm"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.034607 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.034971 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.035944 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.036411 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.036932 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.037149 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 06:35:37 crc kubenswrapper[4937]: 
I0123 06:35:37.037313 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.036941 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.037651 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038012 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-audit-policies\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038048 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038076 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038101 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszbc\" (UniqueName: \"kubernetes.io/projected/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-kube-api-access-gszbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038118 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038136 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038154 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e1fad-048e-4104-aa38-ca05ffec260a-serving-cert\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038172 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038210 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-images\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038227 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-trusted-ca-bundle\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038246 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038260 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-config\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: 
I0123 06:35:37.038287 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-oauth-serving-cert\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038306 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-etcd-client\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038322 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038341 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7be36b21-cbe7-4374-a796-8974ac58d8ed-machine-approver-tls\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038357 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038372 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038389 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bda434a-853e-4281-9e1b-1d79f81f6856-serving-cert\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038406 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038423 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-config\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038437 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-image-import-ca\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038454 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d422a1a4-490a-4905-9093-fd39409c55b7-audit-dir\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038470 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038488 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqjd\" (UniqueName: \"kubernetes.io/projected/a88c49d5-e615-4c41-972e-3a0ddcadfd53-kube-api-access-zhqjd\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038505 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-config\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 
23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038522 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdjf\" (UniqueName: \"kubernetes.io/projected/066984c2-78f9-456c-b263-5f108f7be481-kube-api-access-hjdjf\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038559 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-etcd-client\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038575 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038611 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-audit\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038627 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-config\") pod \"route-controller-manager-6576b87f9c-ds4g7\" 
(UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038642 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrgl\" (UniqueName: \"kubernetes.io/projected/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-kube-api-access-wlrgl\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038665 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038681 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d422a1a4-490a-4905-9093-fd39409c55b7-node-pullsecrets\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038708 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038744 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvlbg\" (UniqueName: \"kubernetes.io/projected/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-kube-api-access-nvlbg\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038761 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-serving-cert\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038778 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swlc\" (UniqueName: \"kubernetes.io/projected/57c396e4-f445-46c6-b636-cfe96d07cc43-kube-api-access-5swlc\") pod \"downloads-7954f5f757-6g59k\" (UID: \"57c396e4-f445-46c6-b636-cfe96d07cc43\") " pod="openshift-console/downloads-7954f5f757-6g59k" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038796 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038814 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-etcd-serving-ca\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038856 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhd65\" (UniqueName: \"kubernetes.io/projected/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-kube-api-access-jhd65\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038872 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-service-ca\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038911 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6zjc\" (UniqueName: \"kubernetes.io/projected/7be36b21-cbe7-4374-a796-8974ac58d8ed-kube-api-access-m6zjc\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038928 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnkps\" (UniqueName: \"kubernetes.io/projected/1b53c885-35c8-44b5-87d8-be092079dd0d-kube-api-access-fnkps\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 
06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038947 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038963 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-encryption-config\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.038989 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4867dee-6837-4bb3-b7c4-18d9842a005a-audit-dir\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039010 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-client-ca\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039027 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b53c885-35c8-44b5-87d8-be092079dd0d-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039043 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/066984c2-78f9-456c-b263-5f108f7be481-serving-cert\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039060 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039077 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hn6p\" (UniqueName: \"kubernetes.io/projected/863e1fad-048e-4104-aa38-ca05ffec260a-kube-api-access-2hn6p\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039092 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-encryption-config\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc 
kubenswrapper[4937]: I0123 06:35:37.039111 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b53c885-35c8-44b5-87d8-be092079dd0d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039131 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-config\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039134 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.040106 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.040664 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.041134 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.041276 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.041631 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042058 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.039148 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042272 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042323 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0a141fd-dece-4e1c-8cb3-794c0b3f6b21-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ss7mm\" (UID: \"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042369 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be36b21-cbe7-4374-a796-8974ac58d8ed-config\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042390 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-config\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042425 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-client-ca\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042457 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042501 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pxwx2\" (UniqueName: \"kubernetes.io/projected/8bda434a-853e-4281-9e1b-1d79f81f6856-kube-api-access-pxwx2\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042530 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b53c885-35c8-44b5-87d8-be092079dd0d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042547 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-serving-cert\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042565 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-serving-cert\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042585 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-dir\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042638 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042655 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp9jz\" (UniqueName: \"kubernetes.io/projected/d0a141fd-dece-4e1c-8cb3-794c0b3f6b21-kube-api-access-rp9jz\") pod \"cluster-samples-operator-665b6dd947-ss7mm\" (UID: \"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042673 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042692 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042712 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-policies\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042778 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7be36b21-cbe7-4374-a796-8974ac58d8ed-auth-proxy-config\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042802 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnb98\" (UniqueName: \"kubernetes.io/projected/d422a1a4-490a-4905-9093-fd39409c55b7-kube-api-access-dnb98\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042819 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkd4\" (UniqueName: \"kubernetes.io/projected/c4867dee-6837-4bb3-b7c4-18d9842a005a-kube-api-access-mjkd4\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.042836 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-oauth-config\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.043730 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.043772 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.044232 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kj2c4"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.044250 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.045626 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.047338 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.047394 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.046268 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.047570 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.049952 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dvjv9"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.050435 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.050468 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.050894 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.051193 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.051286 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.051362 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.047482 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.052059 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.052110 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.052138 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.052182 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.052418 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.052970 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.053162 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.053475 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.055911 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.056629 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.056621 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-62bgj"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.065007 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g9qqj"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.065066 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.065966 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-62bgj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.066091 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.066879 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6g59k"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.067845 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.068825 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.069136 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c9fhv"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.071388 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xmf5"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.072058 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wg85h"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.072412 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.073678 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.073769 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.074963 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.073809 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.073781 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.073836 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.089090 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.095252 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qzw6p"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.097862 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6n269"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.098170 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qzw6p"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.100207 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.101914 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.104519 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.105993 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.106205 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lsjwm"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.108147 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.114547 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rsth7"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.115386 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dvjv9"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.116835 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vn8kx"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.118357 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.119571 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.120697 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.122028 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.126897 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.129740 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-62bgj"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.131227 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.132310 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.133769 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xmf5"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.134939 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8twcn"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.136218 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8twcn"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.136735 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8d4jd"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.137967 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.138115 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.139760 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.140921 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qzw6p"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.141997 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.143641 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144609 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swlc\" (UniqueName: \"kubernetes.io/projected/57c396e4-f445-46c6-b636-cfe96d07cc43-kube-api-access-5swlc\") pod \"downloads-7954f5f757-6g59k\" (UID: \"57c396e4-f445-46c6-b636-cfe96d07cc43\") " pod="openshift-console/downloads-7954f5f757-6g59k"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144644 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144670 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-etcd-serving-ca\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144698 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhd65\" (UniqueName: \"kubernetes.io/projected/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-kube-api-access-jhd65\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144717 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-service-ca\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144746 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6zjc\" (UniqueName: \"kubernetes.io/projected/7be36b21-cbe7-4374-a796-8974ac58d8ed-kube-api-access-m6zjc\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144766 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnkps\" (UniqueName: \"kubernetes.io/projected/1b53c885-35c8-44b5-87d8-be092079dd0d-kube-api-access-fnkps\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144781 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144800 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-encryption-config\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144827 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4867dee-6837-4bb3-b7c4-18d9842a005a-audit-dir\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144855 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/581a142b-da2e-47b5-a96e-bdf19302b1f9-proxy-tls\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144878 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3168738e-e0e3-43d7-bae7-79276263bb8e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bbtnp\" (UID: \"3168738e-e0e3-43d7-bae7-79276263bb8e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144897 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw5zk\" (UniqueName: \"kubernetes.io/projected/3168738e-e0e3-43d7-bae7-79276263bb8e-kube-api-access-mw5zk\") pod \"control-plane-machine-set-operator-78cbb6b69f-bbtnp\" (UID: \"3168738e-e0e3-43d7-bae7-79276263bb8e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144916 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-client-ca\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144935 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b53c885-35c8-44b5-87d8-be092079dd0d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144951 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b53c885-35c8-44b5-87d8-be092079dd0d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144967 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/066984c2-78f9-456c-b263-5f108f7be481-serving-cert\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.144981 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145006 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hn6p\" (UniqueName: \"kubernetes.io/projected/863e1fad-048e-4104-aa38-ca05ffec260a-kube-api-access-2hn6p\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145029 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-encryption-config\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145050 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b2914a6-9bc3-4d47-b951-07cd54c2f8e4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2h4rf\" (UID: \"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145071 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-config\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145088 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145115 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0a141fd-dece-4e1c-8cb3-794c0b3f6b21-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ss7mm\" (UID: \"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145133 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/581a142b-da2e-47b5-a96e-bdf19302b1f9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145151 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be36b21-cbe7-4374-a796-8974ac58d8ed-config\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145174 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-config\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145193 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-client-ca\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145214 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxwx2\" (UniqueName: \"kubernetes.io/projected/8bda434a-853e-4281-9e1b-1d79f81f6856-kube-api-access-pxwx2\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145236 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145282 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a813a9-1cae-4e6f-9f77-c407d0068c92-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145304 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxs8\" (UniqueName: \"kubernetes.io/projected/aec15e62-dae9-4706-86f7-32c738b82ead-kube-api-access-clxs8\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145327 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b53c885-35c8-44b5-87d8-be092079dd0d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145354 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-serving-cert\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145378 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-serving-cert\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145401 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-dir\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145419 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145440 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp9jz\" (UniqueName: \"kubernetes.io/projected/d0a141fd-dece-4e1c-8cb3-794c0b3f6b21-kube-api-access-rp9jz\") pod \"cluster-samples-operator-665b6dd947-ss7mm\" (UID: \"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145475 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145498 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145523 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-policies\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145549 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a813a9-1cae-4e6f-9f77-c407d0068c92-config\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145579 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7be36b21-cbe7-4374-a796-8974ac58d8ed-auth-proxy-config\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145615 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnb98\" (UniqueName: \"kubernetes.io/projected/d422a1a4-490a-4905-9093-fd39409c55b7-kube-api-access-dnb98\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145637 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkd4\" (UniqueName: \"kubernetes.io/projected/c4867dee-6837-4bb3-b7c4-18d9842a005a-kube-api-access-mjkd4\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145658 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-oauth-config\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145684 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjsk\" (UniqueName: \"kubernetes.io/projected/51aa0e66-0340-4981-8a2a-b5248a6dd4bb-kube-api-access-chjsk\") pod \"package-server-manager-789f6589d5-mvhd8\" (UID: \"51aa0e66-0340-4981-8a2a-b5248a6dd4bb\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145711 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtlj\" (UniqueName: \"kubernetes.io/projected/1b2914a6-9bc3-4d47-b951-07cd54c2f8e4-kube-api-access-fwtlj\") pod \"multus-admission-controller-857f4d67dd-2h4rf\" (UID: \"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145729 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145740 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-audit-policies\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145755 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145765 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145768 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:37 crc 
kubenswrapper[4937]: I0123 06:35:37.145897 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145946 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.145989 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszbc\" (UniqueName: \"kubernetes.io/projected/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-kube-api-access-gszbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146022 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146050 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146092 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e1fad-048e-4104-aa38-ca05ffec260a-serving-cert\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146139 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51aa0e66-0340-4981-8a2a-b5248a6dd4bb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mvhd8\" (UID: \"51aa0e66-0340-4981-8a2a-b5248a6dd4bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146177 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-images\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146210 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-trusted-ca-bundle\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc 
kubenswrapper[4937]: I0123 06:35:37.146238 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d1dd039-603f-4e23-982c-f2661f163a0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146266 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146287 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-config\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146321 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-etcd-client\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146340 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-oauth-serving-cert\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " 
pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146360 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a813a9-1cae-4e6f-9f77-c407d0068c92-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146377 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146405 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7be36b21-cbe7-4374-a796-8974ac58d8ed-machine-approver-tls\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146423 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146442 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146462 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bda434a-853e-4281-9e1b-1d79f81f6856-serving-cert\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146481 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146500 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-config\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-image-import-ca\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146538 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d422a1a4-490a-4905-9093-fd39409c55b7-audit-dir\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146556 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146618 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqjd\" (UniqueName: \"kubernetes.io/projected/a88c49d5-e615-4c41-972e-3a0ddcadfd53-kube-api-access-zhqjd\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146693 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64r5j\" (UniqueName: \"kubernetes.io/projected/581a142b-da2e-47b5-a96e-bdf19302b1f9-kube-api-access-64r5j\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146701 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 
06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146716 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1dd039-603f-4e23-982c-f2661f163a0d-config\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146756 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wg85h"] Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146767 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-config\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146860 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdjf\" (UniqueName: \"kubernetes.io/projected/066984c2-78f9-456c-b263-5f108f7be481-kube-api-access-hjdjf\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146885 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aec15e62-dae9-4706-86f7-32c738b82ead-signing-cabundle\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146898 4937 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-client-ca\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146914 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-audit\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146964 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-etcd-client\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146991 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147018 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aec15e62-dae9-4706-86f7-32c738b82ead-signing-key\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147048 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-config\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147074 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrgl\" (UniqueName: \"kubernetes.io/projected/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-kube-api-access-wlrgl\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147132 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147159 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d422a1a4-490a-4905-9093-fd39409c55b7-node-pullsecrets\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147186 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1dd039-603f-4e23-982c-f2661f163a0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147223 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147284 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvlbg\" (UniqueName: \"kubernetes.io/projected/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-kube-api-access-nvlbg\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147306 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-serving-cert\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147673 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-audit\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.147879 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.148183 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.148290 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7be36b21-cbe7-4374-a796-8974ac58d8ed-config\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.148309 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-config\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.149075 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-client-ca\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.149108 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.149130 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-config\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.149471 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-policies\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.150161 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7be36b21-cbe7-4374-a796-8974ac58d8ed-auth-proxy-config\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.150868 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-config\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 
06:35:37.151536 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4867dee-6837-4bb3-b7c4-18d9842a005a-audit-dir\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.151663 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.152059 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-audit-policies\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.152140 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-etcd-serving-ca\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.152142 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b53c885-35c8-44b5-87d8-be092079dd0d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.152562 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kj2c4"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.152618 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.152632 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8d4jd"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.153285 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.154181 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d422a1a4-490a-4905-9093-fd39409c55b7-audit-dir\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.154708 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.154725 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-oauth-serving-cert\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.146711 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.154947 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-serving-cert\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.155082 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d422a1a4-490a-4905-9093-fd39409c55b7-node-pullsecrets\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.155145 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-image-import-ca\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.155758 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.155802 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.155765 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-config\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.156041 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-images\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.156041 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-config\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.156128 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.156390 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4867dee-6837-4bb3-b7c4-18d9842a005a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.156921 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.156993 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-trusted-ca-bundle\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.157101 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/066984c2-78f9-456c-b263-5f108f7be481-service-ca-bundle\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.157407 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-encryption-config\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.157532 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d422a1a4-490a-4905-9093-fd39409c55b7-config\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.158272 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-etcd-client\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.158365 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-dir\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.158930 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-service-ca\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.159032 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0a141fd-dece-4e1c-8cb3-794c0b3f6b21-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ss7mm\" (UID: \"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.159063 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-oauth-config\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.159532 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e1fad-048e-4104-aa38-ca05ffec260a-serving-cert\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.159660 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.160321 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bda434a-853e-4281-9e1b-1d79f81f6856-serving-cert\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.160749 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vvxbh"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.161140 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.161485 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-serving-cert\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.161576 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.161830 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.161865 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vvxbh"]
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.161844 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vvxbh"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.162583 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.162847 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-etcd-client\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.162862 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.162951 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/066984c2-78f9-456c-b263-5f108f7be481-serving-cert\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.162954 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.163197 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.163625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.163943 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4867dee-6837-4bb3-b7c4-18d9842a005a-encryption-config\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.164006 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7be36b21-cbe7-4374-a796-8974ac58d8ed-machine-approver-tls\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.164134 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b53c885-35c8-44b5-87d8-be092079dd0d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.164317 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.164696 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.164709 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.166163 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.173053 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d422a1a4-490a-4905-9093-fd39409c55b7-serving-cert\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.186881 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.207154 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.226082 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.246001 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257283 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/581a142b-da2e-47b5-a96e-bdf19302b1f9-proxy-tls\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257348 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3168738e-e0e3-43d7-bae7-79276263bb8e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bbtnp\" (UID: \"3168738e-e0e3-43d7-bae7-79276263bb8e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257387 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw5zk\" (UniqueName: \"kubernetes.io/projected/3168738e-e0e3-43d7-bae7-79276263bb8e-kube-api-access-mw5zk\") pod \"control-plane-machine-set-operator-78cbb6b69f-bbtnp\" (UID: \"3168738e-e0e3-43d7-bae7-79276263bb8e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257427 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b2914a6-9bc3-4d47-b951-07cd54c2f8e4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2h4rf\" (UID: \"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257467 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/581a142b-da2e-47b5-a96e-bdf19302b1f9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257630 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a813a9-1cae-4e6f-9f77-c407d0068c92-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.257698 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxs8\" (UniqueName: \"kubernetes.io/projected/aec15e62-dae9-4706-86f7-32c738b82ead-kube-api-access-clxs8\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.258190 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a813a9-1cae-4e6f-9f77-c407d0068c92-config\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.258278 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjsk\" (UniqueName: \"kubernetes.io/projected/51aa0e66-0340-4981-8a2a-b5248a6dd4bb-kube-api-access-chjsk\") pod \"package-server-manager-789f6589d5-mvhd8\" (UID: \"51aa0e66-0340-4981-8a2a-b5248a6dd4bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.258420 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtlj\" (UniqueName: \"kubernetes.io/projected/1b2914a6-9bc3-4d47-b951-07cd54c2f8e4-kube-api-access-fwtlj\") pod \"multus-admission-controller-857f4d67dd-2h4rf\" (UID: \"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.258564 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51aa0e66-0340-4981-8a2a-b5248a6dd4bb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mvhd8\" (UID: \"51aa0e66-0340-4981-8a2a-b5248a6dd4bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.258666 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d1dd039-603f-4e23-982c-f2661f163a0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.258963 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a813a9-1cae-4e6f-9f77-c407d0068c92-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.259038 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64r5j\" (UniqueName: \"kubernetes.io/projected/581a142b-da2e-47b5-a96e-bdf19302b1f9-kube-api-access-64r5j\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.259070 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1dd039-603f-4e23-982c-f2661f163a0d-config\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.259294 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aec15e62-dae9-4706-86f7-32c738b82ead-signing-cabundle\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.259400 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aec15e62-dae9-4706-86f7-32c738b82ead-signing-key\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.259467 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1dd039-603f-4e23-982c-f2661f163a0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.259607 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/581a142b-da2e-47b5-a96e-bdf19302b1f9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.266200 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.275564 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1dd039-603f-4e23-982c-f2661f163a0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.286310 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.306009 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.309909 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1dd039-603f-4e23-982c-f2661f163a0d-config\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.326626 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.345827 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.366450 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.386134 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.415438 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.425889 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.446256 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.466459 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.486130 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.507471 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.526297 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.527027 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.532331 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55a813a9-1cae-4e6f-9f77-c407d0068c92-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.548210 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.567145 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.570040 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55a813a9-1cae-4e6f-9f77-c407d0068c92-config\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.586897 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.608257 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.627648 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.646714 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.665966 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.686991 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.708329 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.727185 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.747621 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.768151 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.787118 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.807846 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.827126 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.847317 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.866307 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.887125 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.907328 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.927471 4937 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.947034 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.966410 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.975436 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/51aa0e66-0340-4981-8a2a-b5248a6dd4bb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mvhd8\" (UID: \"51aa0e66-0340-4981-8a2a-b5248a6dd4bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:35:37 crc kubenswrapper[4937]: I0123 06:35:37.987698 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.006486 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.011663 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1b2914a6-9bc3-4d47-b951-07cd54c2f8e4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2h4rf\" (UID: \"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.026840 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.046865 
4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.064524 4937 request.go:700] Waited for 1.011777802s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.066491 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.085907 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.107443 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.128032 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.141982 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3168738e-e0e3-43d7-bae7-79276263bb8e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bbtnp\" (UID: \"3168738e-e0e3-43d7-bae7-79276263bb8e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.146539 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 06:35:38 crc 
kubenswrapper[4937]: I0123 06:35:38.166320 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.187119 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.205926 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.226158 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.232015 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/aec15e62-dae9-4706-86f7-32c738b82ead-signing-cabundle\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.245994 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: E0123 06:35:38.258416 4937 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 23 06:35:38 crc kubenswrapper[4937]: E0123 06:35:38.258628 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/581a142b-da2e-47b5-a96e-bdf19302b1f9-proxy-tls podName:581a142b-da2e-47b5-a96e-bdf19302b1f9 nodeName:}" failed. No retries permitted until 2026-01-23 06:35:38.758483827 +0000 UTC m=+138.562250510 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/581a142b-da2e-47b5-a96e-bdf19302b1f9-proxy-tls") pod "machine-config-controller-84d6567774-n9444" (UID: "581a142b-da2e-47b5-a96e-bdf19302b1f9") : failed to sync secret cache: timed out waiting for the condition Jan 23 06:35:38 crc kubenswrapper[4937]: E0123 06:35:38.260747 4937 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 23 06:35:38 crc kubenswrapper[4937]: E0123 06:35:38.260907 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aec15e62-dae9-4706-86f7-32c738b82ead-signing-key podName:aec15e62-dae9-4706-86f7-32c738b82ead nodeName:}" failed. No retries permitted until 2026-01-23 06:35:38.760877666 +0000 UTC m=+138.564644329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/aec15e62-dae9-4706-86f7-32c738b82ead-signing-key") pod "service-ca-9c57cc56f-62bgj" (UID: "aec15e62-dae9-4706-86f7-32c738b82ead") : failed to sync secret cache: timed out waiting for the condition Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.267489 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.286999 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.305796 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.326717 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.373285 4937 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.387076 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.406721 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.426919 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.446838 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.467265 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.487425 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.507170 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.539907 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.547288 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.567511 4937 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.586478 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.608528 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.627210 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.646900 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.666762 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.706714 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.728334 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.747131 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.767959 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.783930 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/581a142b-da2e-47b5-a96e-bdf19302b1f9-proxy-tls\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.784126 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aec15e62-dae9-4706-86f7-32c738b82ead-signing-key\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.786073 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.788211 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/aec15e62-dae9-4706-86f7-32c738b82ead-signing-key\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.789395 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/581a142b-da2e-47b5-a96e-bdf19302b1f9-proxy-tls\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.807561 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.827786 4937 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.847534 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.866503 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.887081 4937 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.934325 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swlc\" (UniqueName: \"kubernetes.io/projected/57c396e4-f445-46c6-b636-cfe96d07cc43-kube-api-access-5swlc\") pod \"downloads-7954f5f757-6g59k\" (UID: \"57c396e4-f445-46c6-b636-cfe96d07cc43\") " pod="openshift-console/downloads-7954f5f757-6g59k" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.954425 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp9jz\" (UniqueName: \"kubernetes.io/projected/d0a141fd-dece-4e1c-8cb3-794c0b3f6b21-kube-api-access-rp9jz\") pod \"cluster-samples-operator-665b6dd947-ss7mm\" (UID: \"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.974380 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdjf\" (UniqueName: \"kubernetes.io/projected/066984c2-78f9-456c-b263-5f108f7be481-kube-api-access-hjdjf\") pod \"authentication-operator-69f744f599-ffdpb\" (UID: \"066984c2-78f9-456c-b263-5f108f7be481\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:38 crc kubenswrapper[4937]: I0123 06:35:38.981717 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkd4\" (UniqueName: \"kubernetes.io/projected/c4867dee-6837-4bb3-b7c4-18d9842a005a-kube-api-access-mjkd4\") pod \"apiserver-7bbb656c7d-g8lnx\" (UID: \"c4867dee-6837-4bb3-b7c4-18d9842a005a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.002513 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrgl\" (UniqueName: \"kubernetes.io/projected/d59eeb02-0c89-4608-98c6-78a5b88cdd5c-kube-api-access-wlrgl\") pod \"machine-api-operator-5694c8668f-wf8j9\" (UID: \"d59eeb02-0c89-4608-98c6-78a5b88cdd5c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.021534 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxwx2\" (UniqueName: \"kubernetes.io/projected/8bda434a-853e-4281-9e1b-1d79f81f6856-kube-api-access-pxwx2\") pod \"controller-manager-879f6c89f-g9qqj\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") " pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.021643 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.051037 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnb98\" (UniqueName: \"kubernetes.io/projected/d422a1a4-490a-4905-9093-fd39409c55b7-kube-api-access-dnb98\") pod \"apiserver-76f77b778f-c9fhv\" (UID: \"d422a1a4-490a-4905-9093-fd39409c55b7\") " pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.065121 4937 request.go:700] Waited for 1.91364555s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.067401 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszbc\" (UniqueName: \"kubernetes.io/projected/ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a-kube-api-access-gszbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qztwb\" (UID: \"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.069896 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.072055 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.087668 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b53c885-35c8-44b5-87d8-be092079dd0d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.106889 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.109058 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqjd\" (UniqueName: \"kubernetes.io/projected/a88c49d5-e615-4c41-972e-3a0ddcadfd53-kube-api-access-zhqjd\") pod \"console-f9d7485db-6n269\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.132433 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hn6p\" (UniqueName: \"kubernetes.io/projected/863e1fad-048e-4104-aa38-ca05ffec260a-kube-api-access-2hn6p\") pod \"route-controller-manager-6576b87f9c-ds4g7\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.153578 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.169051 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhd65\" (UniqueName: \"kubernetes.io/projected/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-kube-api-access-jhd65\") pod \"oauth-openshift-558db77b4-rsth7\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.173918 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvlbg\" (UniqueName: \"kubernetes.io/projected/30f2f2b2-8cb3-47cf-a066-87ddfdd40201-kube-api-access-nvlbg\") pod \"openshift-config-operator-7777fb866f-f2dl4\" (UID: \"30f2f2b2-8cb3-47cf-a066-87ddfdd40201\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.177125 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6g59k" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.189213 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.191015 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.208271 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.210335 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.217480 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.225518 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.227014 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.229188 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnkps\" (UniqueName: \"kubernetes.io/projected/1b53c885-35c8-44b5-87d8-be092079dd0d-kube-api-access-fnkps\") pod \"cluster-image-registry-operator-dc59b4c8b-56z88\" (UID: \"1b53c885-35c8-44b5-87d8-be092079dd0d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.238690 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.242095 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.249170 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.268886 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6zjc\" (UniqueName: \"kubernetes.io/projected/7be36b21-cbe7-4374-a796-8974ac58d8ed-kube-api-access-m6zjc\") pod \"machine-approver-56656f9798-z7zfr\" (UID: \"7be36b21-cbe7-4374-a796-8974ac58d8ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.282429 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw5zk\" (UniqueName: \"kubernetes.io/projected/3168738e-e0e3-43d7-bae7-79276263bb8e-kube-api-access-mw5zk\") pod \"control-plane-machine-set-operator-78cbb6b69f-bbtnp\" (UID: \"3168738e-e0e3-43d7-bae7-79276263bb8e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.301119 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxs8\" (UniqueName: \"kubernetes.io/projected/aec15e62-dae9-4706-86f7-32c738b82ead-kube-api-access-clxs8\") pod \"service-ca-9c57cc56f-62bgj\" (UID: \"aec15e62-dae9-4706-86f7-32c738b82ead\") " pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.323518 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjsk\" (UniqueName: \"kubernetes.io/projected/51aa0e66-0340-4981-8a2a-b5248a6dd4bb-kube-api-access-chjsk\") pod \"package-server-manager-789f6589d5-mvhd8\" (UID: \"51aa0e66-0340-4981-8a2a-b5248a6dd4bb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.345446 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtlj\" 
(UniqueName: \"kubernetes.io/projected/1b2914a6-9bc3-4d47-b951-07cd54c2f8e4-kube-api-access-fwtlj\") pod \"multus-admission-controller-857f4d67dd-2h4rf\" (UID: \"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.363321 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d1dd039-603f-4e23-982c-f2661f163a0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n5hmd\" (UID: \"1d1dd039-603f-4e23-982c-f2661f163a0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.370680 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.374908 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.382801 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55a813a9-1cae-4e6f-9f77-c407d0068c92-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-p2t75\" (UID: \"55a813a9-1cae-4e6f-9f77-c407d0068c92\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.395470 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.407061 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.408770 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64r5j\" (UniqueName: \"kubernetes.io/projected/581a142b-da2e-47b5-a96e-bdf19302b1f9-kube-api-access-64r5j\") pod \"machine-config-controller-84d6567774-n9444\" (UID: \"581a142b-da2e-47b5-a96e-bdf19302b1f9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.416036 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.426878 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.457059 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.466373 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.497854 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjdj\" (UniqueName: \"kubernetes.io/projected/4059e769-c52c-44c3-88c0-35ab870cedf8-kube-api-access-dqjdj\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.497901 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/297bbd69-aad2-415f-8b99-9035016c99b9-profile-collector-cert\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.497929 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4bd\" (UniqueName: \"kubernetes.io/projected/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-kube-api-access-5n4bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.497988 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: 
I0123 06:35:39.498012 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7w5x\" (UniqueName: \"kubernetes.io/projected/46ddd3f1-b28d-4390-80f5-92990c25a964-kube-api-access-n7w5x\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498047 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f613b04d-1d2f-4d5e-a958-308811ae4ff9-config\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498068 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-srv-cert\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498164 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-trusted-ca\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498185 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27qn\" (UniqueName: 
\"kubernetes.io/projected/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-kube-api-access-c27qn\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498206 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f613b04d-1d2f-4d5e-a958-308811ae4ff9-serving-cert\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498250 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498278 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87779486-8ef2-4883-905c-efd17bbfd5ce-metrics-tls\") pod \"dns-operator-744455d44c-dvjv9\" (UID: \"87779486-8ef2-4883-905c-efd17bbfd5ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498305 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgc95\" (UniqueName: \"kubernetes.io/projected/dda6ba11-61ab-4501-afa3-5bb654f352ea-kube-api-access-qgc95\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 
06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498328 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-images\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498350 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-service-ca\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498370 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvtj\" (UniqueName: \"kubernetes.io/projected/d2d158c3-bd29-4c22-97fd-be2db4d77d86-kube-api-access-nmvtj\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498392 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-tls\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498436 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-client\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498457 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03d93fa6-01e1-461e-b5d7-c331afb70cf6-webhook-cert\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498482 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-certificates\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498516 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqzp\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-kube-api-access-7pqzp\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498537 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-proxy-tls\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 
crc kubenswrapper[4937]: I0123 06:35:39.498570 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8d4\" (UniqueName: \"kubernetes.io/projected/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-kube-api-access-vr8d4\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498626 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-metrics-tls\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498664 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecae0951-0752-4c13-98ab-6fa8f4f86c33-secret-volume\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498684 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvdx\" (UniqueName: \"kubernetes.io/projected/297bbd69-aad2-415f-8b99-9035016c99b9-kube-api-access-gxvdx\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498705 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-config\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498735 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498771 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm77v\" (UniqueName: \"kubernetes.io/projected/ecae0951-0752-4c13-98ab-6fa8f4f86c33-kube-api-access-jm77v\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498815 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rng4c\" (UniqueName: \"kubernetes.io/projected/b43b4dab-66a6-47e9-bb22-5cc338833a5e-kube-api-access-rng4c\") pod \"migrator-59844c95c7-x2zhr\" (UID: \"b43b4dab-66a6-47e9-bb22-5cc338833a5e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498836 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggl4\" (UniqueName: \"kubernetes.io/projected/03d93fa6-01e1-461e-b5d7-c331afb70cf6-kube-api-access-vggl4\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498856 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498901 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-stats-auth\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498924 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498944 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecae0951-0752-4c13-98ab-6fa8f4f86c33-config-volume\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498967 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9pb57\" (UniqueName: \"kubernetes.io/projected/6613fb7b-17c3-422f-bb49-6c960b765e63-kube-api-access-9pb57\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.498988 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499024 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499059 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4059e769-c52c-44c3-88c0-35ab870cedf8-service-ca-bundle\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499081 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-trusted-ca\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499099 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-ca\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499119 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/297bbd69-aad2-415f-8b99-9035016c99b9-srv-cert\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499143 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpj7g\" (UniqueName: \"kubernetes.io/projected/f613b04d-1d2f-4d5e-a958-308811ae4ff9-kube-api-access-dpj7g\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499165 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499186 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/03d93fa6-01e1-461e-b5d7-c331afb70cf6-tmpfs\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499210 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lxn\" (UniqueName: \"kubernetes.io/projected/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-kube-api-access-77lxn\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499231 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499264 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-default-certificate\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499296 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499316 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499338 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f613b04d-1d2f-4d5e-a958-308811ae4ff9-trusted-ca\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499360 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-metrics-certs\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499384 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499418 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-bound-sa-token\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499438 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfzq\" (UniqueName: \"kubernetes.io/projected/87779486-8ef2-4883-905c-efd17bbfd5ce-kube-api-access-drfzq\") pod \"dns-operator-744455d44c-dvjv9\" (UID: \"87779486-8ef2-4883-905c-efd17bbfd5ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499473 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ddd3f1-b28d-4390-80f5-92990c25a964-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499509 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d158c3-bd29-4c22-97fd-be2db4d77d86-serving-cert\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499542 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ddd3f1-b28d-4390-80f5-92990c25a964-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499564 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d158c3-bd29-4c22-97fd-be2db4d77d86-config\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499680 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6613fb7b-17c3-422f-bb49-6c960b765e63-serving-cert\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499706 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.499744 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03d93fa6-01e1-461e-b5d7-c331afb70cf6-apiservice-cert\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc 
kubenswrapper[4937]: E0123 06:35:39.514525 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.014487943 +0000 UTC m=+139.818254596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.589857 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g9qqj"] Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.602369 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.602942 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pb57\" (UniqueName: \"kubernetes.io/projected/6613fb7b-17c3-422f-bb49-6c960b765e63-kube-api-access-9pb57\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603043 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603072 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-wf8j9"] Jan 23 06:35:39 crc kubenswrapper[4937]: E0123 06:35:39.603199 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.1031726 +0000 UTC m=+139.906939253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603464 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603558 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87md2\" (UniqueName: 
\"kubernetes.io/projected/52af4349-f45c-4926-8eba-4f68a4feadf3-kube-api-access-87md2\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603674 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4059e769-c52c-44c3-88c0-35ab870cedf8-service-ca-bundle\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603775 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-trusted-ca\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603848 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-ca\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.603929 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/297bbd69-aad2-415f-8b99-9035016c99b9-srv-cert\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604017 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-socket-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604109 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604193 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/52af4349-f45c-4926-8eba-4f68a4feadf3-certs\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604324 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpj7g\" (UniqueName: \"kubernetes.io/projected/f613b04d-1d2f-4d5e-a958-308811ae4ff9-kube-api-access-dpj7g\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604431 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/03d93fa6-01e1-461e-b5d7-c331afb70cf6-tmpfs\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 
06:35:39.604571 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lxn\" (UniqueName: \"kubernetes.io/projected/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-kube-api-access-77lxn\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604686 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604802 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-default-certificate\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.604923 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605029 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: 
\"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605124 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f613b04d-1d2f-4d5e-a958-308811ae4ff9-trusted-ca\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605215 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-metrics-certs\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605305 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605406 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-bound-sa-token\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605498 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfzq\" (UniqueName: 
\"kubernetes.io/projected/87779486-8ef2-4883-905c-efd17bbfd5ce-kube-api-access-drfzq\") pod \"dns-operator-744455d44c-dvjv9\" (UID: \"87779486-8ef2-4883-905c-efd17bbfd5ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605603 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ddd3f1-b28d-4390-80f5-92990c25a964-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605716 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d158c3-bd29-4c22-97fd-be2db4d77d86-serving-cert\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605805 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ddd3f1-b28d-4390-80f5-92990c25a964-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605878 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d158c3-bd29-4c22-97fd-be2db4d77d86-config\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc 
kubenswrapper[4937]: I0123 06:35:39.605951 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44269\" (UniqueName: \"kubernetes.io/projected/0df5bf4c-692d-4a83-b70d-47c0d8294395-kube-api-access-44269\") pod \"ingress-canary-qzw6p\" (UID: \"0df5bf4c-692d-4a83-b70d-47c0d8294395\") " pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606040 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6613fb7b-17c3-422f-bb49-6c960b765e63-serving-cert\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606135 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606292 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03d93fa6-01e1-461e-b5d7-c331afb70cf6-apiservice-cert\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606417 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/297bbd69-aad2-415f-8b99-9035016c99b9-profile-collector-cert\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: 
\"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606521 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4bd\" (UniqueName: \"kubernetes.io/projected/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-kube-api-access-5n4bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606702 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjdj\" (UniqueName: \"kubernetes.io/projected/4059e769-c52c-44c3-88c0-35ab870cedf8-kube-api-access-dqjdj\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606848 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606959 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/52af4349-f45c-4926-8eba-4f68a4feadf3-node-bootstrap-token\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607059 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a410c4c7-42d4-41e3-bf32-17e9efb91983-metrics-tls\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607778 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607819 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7w5x\" (UniqueName: \"kubernetes.io/projected/46ddd3f1-b28d-4390-80f5-92990c25a964-kube-api-access-n7w5x\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607850 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f613b04d-1d2f-4d5e-a958-308811ae4ff9-config\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607893 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f613b04d-1d2f-4d5e-a958-308811ae4ff9-trusted-ca\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " 
pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607960 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-srv-cert\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607995 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a410c4c7-42d4-41e3-bf32-17e9efb91983-config-volume\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608071 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-trusted-ca\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608143 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c27qn\" (UniqueName: \"kubernetes.io/projected/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-kube-api-access-c27qn\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608182 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f613b04d-1d2f-4d5e-a958-308811ae4ff9-serving-cert\") pod \"console-operator-58897d9998-vn8kx\" (UID: 
\"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.606712 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-trusted-ca\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608213 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-csi-data-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605220 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4059e769-c52c-44c3-88c0-35ab870cedf8-service-ca-bundle\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608617 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608700 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87779486-8ef2-4883-905c-efd17bbfd5ce-metrics-tls\") pod \"dns-operator-744455d44c-dvjv9\" (UID: \"87779486-8ef2-4883-905c-efd17bbfd5ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608790 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgc95\" (UniqueName: \"kubernetes.io/projected/dda6ba11-61ab-4501-afa3-5bb654f352ea-kube-api-access-qgc95\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608953 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-images\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.608994 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-tls\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.609014 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-service-ca\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.609031 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nmvtj\" (UniqueName: \"kubernetes.io/projected/d2d158c3-bd29-4c22-97fd-be2db4d77d86-kube-api-access-nmvtj\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.609072 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-client\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.609089 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03d93fa6-01e1-461e-b5d7-c331afb70cf6-webhook-cert\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.607235 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-ca\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.611156 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc 
kubenswrapper[4937]: I0123 06:35:39.611490 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.611854 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/297bbd69-aad2-415f-8b99-9035016c99b9-srv-cert\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.612244 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f613b04d-1d2f-4d5e-a958-308811ae4ff9-config\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.612265 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.605917 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/03d93fa6-01e1-461e-b5d7-c331afb70cf6-tmpfs\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.613017 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-trusted-ca\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.613497 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ddd3f1-b28d-4390-80f5-92990c25a964-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: E0123 06:35:39.614411 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.114388802 +0000 UTC m=+139.918155455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.614513 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/297bbd69-aad2-415f-8b99-9035016c99b9-profile-collector-cert\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.614511 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.614935 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ddd3f1-b28d-4390-80f5-92990c25a964-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.617084 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-tls\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.617195 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htx2t\" (UniqueName: \"kubernetes.io/projected/ecbb881d-4369-4366-a3ae-e6520b39ef2a-kube-api-access-htx2t\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.617258 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f613b04d-1d2f-4d5e-a958-308811ae4ff9-serving-cert\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.619119 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-metrics-certs\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.619215 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-service-ca\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.619523 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6613fb7b-17c3-422f-bb49-6c960b765e63-serving-cert\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.619895 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2d158c3-bd29-4c22-97fd-be2db4d77d86-serving-cert\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.620470 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2d158c3-bd29-4c22-97fd-be2db4d77d86-config\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.620560 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-images\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.620718 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.620903 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6613fb7b-17c3-422f-bb49-6c960b765e63-etcd-client\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.621114 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.621250 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.621574 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-certificates\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.622215 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-srv-cert\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.622337 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqzp\" (UniqueName: 
\"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-kube-api-access-7pqzp\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.622490 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-proxy-tls\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.622525 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-registration-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.622725 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8d4\" (UniqueName: \"kubernetes.io/projected/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-kube-api-access-vr8d4\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.623000 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-certificates\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 
06:35:39.623285 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df5bf4c-692d-4a83-b70d-47c0d8294395-cert\") pod \"ingress-canary-qzw6p\" (UID: \"0df5bf4c-692d-4a83-b70d-47c0d8294395\") " pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.623488 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87779486-8ef2-4883-905c-efd17bbfd5ce-metrics-tls\") pod \"dns-operator-744455d44c-dvjv9\" (UID: \"87779486-8ef2-4883-905c-efd17bbfd5ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.623938 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-default-certificate\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.624439 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03d93fa6-01e1-461e-b5d7-c331afb70cf6-apiservice-cert\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.624454 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-metrics-tls\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 
06:35:39.624483 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecae0951-0752-4c13-98ab-6fa8f4f86c33-secret-volume\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.624503 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvdx\" (UniqueName: \"kubernetes.io/projected/297bbd69-aad2-415f-8b99-9035016c99b9-kube-api-access-gxvdx\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626027 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-config\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626056 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626076 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-mountpoint-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626154 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7bg\" (UniqueName: \"kubernetes.io/projected/a410c4c7-42d4-41e3-bf32-17e9efb91983-kube-api-access-qv7bg\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626211 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm77v\" (UniqueName: \"kubernetes.io/projected/ecae0951-0752-4c13-98ab-6fa8f4f86c33-kube-api-access-jm77v\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626240 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rng4c\" (UniqueName: \"kubernetes.io/projected/b43b4dab-66a6-47e9-bb22-5cc338833a5e-kube-api-access-rng4c\") pod \"migrator-59844c95c7-x2zhr\" (UID: \"b43b4dab-66a6-47e9-bb22-5cc338833a5e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626262 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggl4\" (UniqueName: \"kubernetes.io/projected/03d93fa6-01e1-461e-b5d7-c331afb70cf6-kube-api-access-vggl4\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626280 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626313 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-plugins-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626346 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-stats-auth\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626369 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.626387 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecae0951-0752-4c13-98ab-6fa8f4f86c33-config-volume\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.627100 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecae0951-0752-4c13-98ab-6fa8f4f86c33-config-volume\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.627536 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6613fb7b-17c3-422f-bb49-6c960b765e63-config\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.628179 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.629400 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.631109 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-proxy-tls\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 
06:35:39.631239 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecae0951-0752-4c13-98ab-6fa8f4f86c33-secret-volume\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.631790 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03d93fa6-01e1-461e-b5d7-c331afb70cf6-webhook-cert\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.632137 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-metrics-tls\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.633787 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4059e769-c52c-44c3-88c0-35ab870cedf8-stats-auth\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.634235 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc 
kubenswrapper[4937]: I0123 06:35:39.636867 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.653569 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pb57\" (UniqueName: \"kubernetes.io/projected/6613fb7b-17c3-422f-bb49-6c960b765e63-kube-api-access-9pb57\") pod \"etcd-operator-b45778765-kj2c4\" (UID: \"6613fb7b-17c3-422f-bb49-6c960b765e63\") " pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.653914 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.679536 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lxn\" (UniqueName: \"kubernetes.io/projected/c6a1b9ab-e89c-4867-8b96-775cf42abbe3-kube-api-access-77lxn\") pod \"olm-operator-6b444d44fb-bzkz5\" (UID: \"c6a1b9ab-e89c-4867-8b96-775cf42abbe3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.702634 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.723650 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpj7g\" (UniqueName: \"kubernetes.io/projected/f613b04d-1d2f-4d5e-a958-308811ae4ff9-kube-api-access-dpj7g\") pod \"console-operator-58897d9998-vn8kx\" (UID: \"f613b04d-1d2f-4d5e-a958-308811ae4ff9\") " pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.727904 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:39 crc kubenswrapper[4937]: E0123 06:35:39.728342 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.228298893 +0000 UTC m=+140.032065546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729052 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/52af4349-f45c-4926-8eba-4f68a4feadf3-node-bootstrap-token\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729103 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a410c4c7-42d4-41e3-bf32-17e9efb91983-metrics-tls\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729145 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a410c4c7-42d4-41e3-bf32-17e9efb91983-config-volume\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729170 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-csi-data-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 
06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729206 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729251 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htx2t\" (UniqueName: \"kubernetes.io/projected/ecbb881d-4369-4366-a3ae-e6520b39ef2a-kube-api-access-htx2t\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729295 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-registration-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729338 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df5bf4c-692d-4a83-b70d-47c0d8294395-cert\") pod \"ingress-canary-qzw6p\" (UID: \"0df5bf4c-692d-4a83-b70d-47c0d8294395\") " pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:39 crc kubenswrapper[4937]: E0123 06:35:39.729798 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.229789876 +0000 UTC m=+140.033556529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729889 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-csi-data-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729960 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-registration-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.729989 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7bg\" (UniqueName: \"kubernetes.io/projected/a410c4c7-42d4-41e3-bf32-17e9efb91983-kube-api-access-qv7bg\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730047 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-mountpoint-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730094 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-plugins-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730119 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87md2\" (UniqueName: \"kubernetes.io/projected/52af4349-f45c-4926-8eba-4f68a4feadf3-kube-api-access-87md2\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730152 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-socket-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730175 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/52af4349-f45c-4926-8eba-4f68a4feadf3-certs\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730193 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-plugins-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730233 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-mountpoint-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730256 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44269\" (UniqueName: \"kubernetes.io/projected/0df5bf4c-692d-4a83-b70d-47c0d8294395-kube-api-access-44269\") pod \"ingress-canary-qzw6p\" (UID: \"0df5bf4c-692d-4a83-b70d-47c0d8294395\") " pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730385 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ecbb881d-4369-4366-a3ae-e6520b39ef2a-socket-dir\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730506 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a410c4c7-42d4-41e3-bf32-17e9efb91983-config-volume\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.730805 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfzq\" (UniqueName: \"kubernetes.io/projected/87779486-8ef2-4883-905c-efd17bbfd5ce-kube-api-access-drfzq\") pod \"dns-operator-744455d44c-dvjv9\" (UID: \"87779486-8ef2-4883-905c-efd17bbfd5ce\") " pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 
06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.732226 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjdj\" (UniqueName: \"kubernetes.io/projected/4059e769-c52c-44c3-88c0-35ab870cedf8-kube-api-access-dqjdj\") pod \"router-default-5444994796-62pqr\" (UID: \"4059e769-c52c-44c3-88c0-35ab870cedf8\") " pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.735450 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/52af4349-f45c-4926-8eba-4f68a4feadf3-node-bootstrap-token\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.740079 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/52af4349-f45c-4926-8eba-4f68a4feadf3-certs\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.742519 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a410c4c7-42d4-41e3-bf32-17e9efb91983-metrics-tls\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.744209 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0df5bf4c-692d-4a83-b70d-47c0d8294395-cert\") pod \"ingress-canary-qzw6p\" (UID: \"0df5bf4c-692d-4a83-b70d-47c0d8294395\") " pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.755391 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4bd\" (UniqueName: \"kubernetes.io/projected/35bbdc02-0a04-4a8f-a8a2-d7586dc04036-kube-api-access-5n4bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-z6bsb\" (UID: \"35bbdc02-0a04-4a8f-a8a2-d7586dc04036\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.768517 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgc95\" (UniqueName: \"kubernetes.io/projected/dda6ba11-61ab-4501-afa3-5bb654f352ea-kube-api-access-qgc95\") pod \"marketplace-operator-79b997595-7xmf5\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.769674 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm"] Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.799555 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c9fhv"] Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.803643 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-bound-sa-token\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.806190 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7w5x\" (UniqueName: \"kubernetes.io/projected/46ddd3f1-b28d-4390-80f5-92990c25a964-kube-api-access-n7w5x\") pod \"openshift-controller-manager-operator-756b6f6bc6-r7hgr\" (UID: \"46ddd3f1-b28d-4390-80f5-92990c25a964\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.826225 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf5a5a06-3af5-4695-a8b9-71d07ba3470b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z2nn6\" (UID: \"cf5a5a06-3af5-4695-a8b9-71d07ba3470b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.831321 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:39 crc kubenswrapper[4937]: E0123 06:35:39.831737 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.331720203 +0000 UTC m=+140.135486856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.839201 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvtj\" (UniqueName: \"kubernetes.io/projected/d2d158c3-bd29-4c22-97fd-be2db4d77d86-kube-api-access-nmvtj\") pod \"service-ca-operator-777779d784-wg85h\" (UID: \"d2d158c3-bd29-4c22-97fd-be2db4d77d86\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.883675 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27qn\" (UniqueName: \"kubernetes.io/projected/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-kube-api-access-c27qn\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.889115 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.895956 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.903018 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.922889 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.923479 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8d4\" (UniqueName: \"kubernetes.io/projected/0e1bc8a7-4de4-4e7b-8771-c355e67a8279-kube-api-access-vr8d4\") pod \"machine-config-operator-74547568cd-2k8t2\" (UID: \"0e1bc8a7-4de4-4e7b-8771-c355e67a8279\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.930778 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ffdpb"] Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.932994 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:39 crc kubenswrapper[4937]: E0123 06:35:39.933263 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.433250478 +0000 UTC m=+140.237017131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.943003 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.945569 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvdx\" (UniqueName: \"kubernetes.io/projected/297bbd69-aad2-415f-8b99-9035016c99b9-kube-api-access-gxvdx\") pod \"catalog-operator-68c6474976-b6qxt\" (UID: \"297bbd69-aad2-415f-8b99-9035016c99b9\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.956939 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.970766 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rng4c\" (UniqueName: \"kubernetes.io/projected/b43b4dab-66a6-47e9-bb22-5cc338833a5e-kube-api-access-rng4c\") pod \"migrator-59844c95c7-x2zhr\" (UID: \"b43b4dab-66a6-47e9-bb22-5cc338833a5e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.982196 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f106b4d2-ffb2-4cdb-bb19-3f107ac274af-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vd4ks\" (UID: \"f106b4d2-ffb2-4cdb-bb19-3f107ac274af\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:39 crc kubenswrapper[4937]: I0123 06:35:39.989112 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.007133 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggl4\" (UniqueName: \"kubernetes.io/projected/03d93fa6-01e1-461e-b5d7-c331afb70cf6-kube-api-access-vggl4\") pod \"packageserver-d55dfcdfc-hplkr\" (UID: \"03d93fa6-01e1-461e-b5d7-c331afb70cf6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.009371 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.030987 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.036904 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.037462 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.537423179 +0000 UTC m=+140.341189832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.040169 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.049436 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.061288 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7bg\" (UniqueName: \"kubernetes.io/projected/a410c4c7-42d4-41e3-bf32-17e9efb91983-kube-api-access-qv7bg\") pod \"dns-default-vvxbh\" (UID: \"a410c4c7-42d4-41e3-bf32-17e9efb91983\") " pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.065279 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.081342 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44269\" (UniqueName: \"kubernetes.io/projected/0df5bf4c-692d-4a83-b70d-47c0d8294395-kube-api-access-44269\") pod \"ingress-canary-qzw6p\" (UID: \"0df5bf4c-692d-4a83-b70d-47c0d8294395\") " pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.101404 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htx2t\" (UniqueName: \"kubernetes.io/projected/ecbb881d-4369-4366-a3ae-e6520b39ef2a-kube-api-access-htx2t\") pod \"csi-hostpathplugin-8d4jd\" (UID: \"ecbb881d-4369-4366-a3ae-e6520b39ef2a\") " pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.105335 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.112119 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.139141 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.139510 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.63949431 +0000 UTC m=+140.443260963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.230437 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.241987 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.741954133 +0000 UTC m=+140.545720786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.242575 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.242960 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.243361 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.743350123 +0000 UTC m=+140.547116966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.248951 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.344046 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.344291 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.844255511 +0000 UTC m=+140.648022164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.344553 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.345190 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.845166416 +0000 UTC m=+140.648933109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.370049 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qzw6p" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.408425 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" event={"ID":"d59eeb02-0c89-4608-98c6-78a5b88cdd5c","Type":"ContainerStarted","Data":"e1badf6e44850ece184f3ef6da885a31c8f831059fe68af0ee12404f56252b5c"} Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.411551 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" event={"ID":"7be36b21-cbe7-4374-a796-8974ac58d8ed","Type":"ContainerStarted","Data":"84c1af53009da5dc39dc075d01448fc9f9cf1d61c735a32231770647f7815898"} Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.412637 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" event={"ID":"d422a1a4-490a-4905-9093-fd39409c55b7","Type":"ContainerStarted","Data":"3a412982b5f17bbf8a2ca666f8a1665f3b00b9f3edc31db7a2c5ea86000ba6d7"} Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.413665 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" event={"ID":"8bda434a-853e-4281-9e1b-1d79f81f6856","Type":"ContainerStarted","Data":"075027acb379c0c88b278018b7697aa5bb51c196eec7164669426b9741e12aed"} Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.438462 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6n269"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.446037 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 
crc kubenswrapper[4937]: E0123 06:35:40.446191 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.946160946 +0000 UTC m=+140.749927589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.446443 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.446800 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:40.946792375 +0000 UTC m=+140.750559028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.447450 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.451082 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.453037 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2h4rf"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.455035 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6g59k"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.465310 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.475368 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.478899 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n9444"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.485003 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4"] Jan 23 
06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.487546 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.493447 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rsth7"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.493585 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-62bgj"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.496716 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.496747 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-kj2c4"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.504020 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.505584 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd"] Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.547495 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.547713 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.047673391 +0000 UTC m=+140.851440044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.547846 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.548297 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.048288439 +0000 UTC m=+140.852055092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.648876 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.649097 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.149065393 +0000 UTC m=+140.952832046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.649217 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.649789 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.149763983 +0000 UTC m=+140.953530636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.750784 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.751326 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.251271687 +0000 UTC m=+141.055038390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.801068 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87md2\" (UniqueName: \"kubernetes.io/projected/52af4349-f45c-4926-8eba-4f68a4feadf3-kube-api-access-87md2\") pod \"machine-config-server-8twcn\" (UID: \"52af4349-f45c-4926-8eba-4f68a4feadf3\") " pod="openshift-machine-config-operator/machine-config-server-8twcn" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.801143 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqzp\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-kube-api-access-7pqzp\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.806791 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm77v\" (UniqueName: \"kubernetes.io/projected/ecae0951-0752-4c13-98ab-6fa8f4f86c33-kube-api-access-jm77v\") pod \"collect-profiles-29485830-txg4v\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.814682 4937 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod066984c2_78f9_456c_b263_5f108f7be481.slice/crio-1be73c40efe7b9c3682384ee4eda312b98518951db5bd53bc085ecac40c9bd3a WatchSource:0}: Error finding container 1be73c40efe7b9c3682384ee4eda312b98518951db5bd53bc085ecac40c9bd3a: Status 404 returned error can't find the container with id 1be73c40efe7b9c3682384ee4eda312b98518951db5bd53bc085ecac40c9bd3a
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.820250 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863e1fad_048e_4104_aa38_ca05ffec260a.slice/crio-dfd396ae0ccfb7502208a66650eac2f4237a7173b500717bd46f64cdfce98c79 WatchSource:0}: Error finding container dfd396ae0ccfb7502208a66650eac2f4237a7173b500717bd46f64cdfce98c79: Status 404 returned error can't find the container with id dfd396ae0ccfb7502208a66650eac2f4237a7173b500717bd46f64cdfce98c79
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.822048 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda88c49d5_e615_4c41_972e_3a0ddcadfd53.slice/crio-50783287e8f92574100d0401f3ac1731291064ca4bf5e0e399111ad8988a1065 WatchSource:0}: Error finding container 50783287e8f92574100d0401f3ac1731291064ca4bf5e0e399111ad8988a1065: Status 404 returned error can't find the container with id 50783287e8f92574100d0401f3ac1731291064ca4bf5e0e399111ad8988a1065
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.825331 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebfb85b7_e4ba_4391_82c1_ccf6f0143d3a.slice/crio-76724f447e1be6923570dc9cd5806e591b35d6b08fd5256b17ae701722379104 WatchSource:0}: Error finding container 76724f447e1be6923570dc9cd5806e591b35d6b08fd5256b17ae701722379104: Status 404 returned error can't find the container with id 76724f447e1be6923570dc9cd5806e591b35d6b08fd5256b17ae701722379104
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.832477 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4867dee_6837_4bb3_b7c4_18d9842a005a.slice/crio-5d85e661a5c3bb07f96527499fd1b91ef2091859babe46e65b70150557af15b4 WatchSource:0}: Error finding container 5d85e661a5c3bb07f96527499fd1b91ef2091859babe46e65b70150557af15b4: Status 404 returned error can't find the container with id 5d85e661a5c3bb07f96527499fd1b91ef2091859babe46e65b70150557af15b4
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.845964 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b2914a6_9bc3_4d47_b951_07cd54c2f8e4.slice/crio-776d6ecb0bef2659679ebf5329d370e385dac57f5f20627fb976b1084a0442f3 WatchSource:0}: Error finding container 776d6ecb0bef2659679ebf5329d370e385dac57f5f20627fb976b1084a0442f3: Status 404 returned error can't find the container with id 776d6ecb0bef2659679ebf5329d370e385dac57f5f20627fb976b1084a0442f3
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.847910 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3168738e_e0e3_43d7_bae7_79276263bb8e.slice/crio-106ee9d5f51d9f42f8f66e48c628e5fb74f32bce48804d89527b43173d9a04b2 WatchSource:0}: Error finding container 106ee9d5f51d9f42f8f66e48c628e5fb74f32bce48804d89527b43173d9a04b2: Status 404 returned error can't find the container with id 106ee9d5f51d9f42f8f66e48c628e5fb74f32bce48804d89527b43173d9a04b2
Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.852686 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.853327 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.353295107 +0000 UTC m=+141.157061860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.854477 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f2f2b2_8cb3_47cf_a066_87ddfdd40201.slice/crio-4e74379f11690259a0774f99f649cb001dc77c86f175531e0653860869de376e WatchSource:0}: Error finding container 4e74379f11690259a0774f99f649cb001dc77c86f175531e0653860869de376e: Status 404 returned error can't find the container with id 4e74379f11690259a0774f99f649cb001dc77c86f175531e0653860869de376e
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.863323 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51aa0e66_0340_4981_8a2a_b5248a6dd4bb.slice/crio-1bd7f41385322dd08edd7d623ea72b5a9da4e3d037c9e16c890c094a04743147 WatchSource:0}: Error finding container 1bd7f41385322dd08edd7d623ea72b5a9da4e3d037c9e16c890c094a04743147: Status 404 returned error can't find the container with id 1bd7f41385322dd08edd7d623ea72b5a9da4e3d037c9e16c890c094a04743147
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.863819 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6a1b9ab_e89c_4867_8b96_775cf42abbe3.slice/crio-f45a3594a7dddcd87b3d028ab4990432b33303c82aa1f634c3ecee4e1ecae7e5 WatchSource:0}: Error finding container f45a3594a7dddcd87b3d028ab4990432b33303c82aa1f634c3ecee4e1ecae7e5: Status 404 returned error can't find the container with id f45a3594a7dddcd87b3d028ab4990432b33303c82aa1f634c3ecee4e1ecae7e5
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.893386 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod581a142b_da2e_47b5_a96e_bdf19302b1f9.slice/crio-70ff8cf601c45d167520066080a79d419157553fcac36529aa27532f16cab720 WatchSource:0}: Error finding container 70ff8cf601c45d167520066080a79d419157553fcac36529aa27532f16cab720: Status 404 returned error can't find the container with id 70ff8cf601c45d167520066080a79d419157553fcac36529aa27532f16cab720
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.895554 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec15e62_dae9_4706_86f7_32c738b82ead.slice/crio-89ae49ef5ce6931afc2dfe469d9ffa2030c2cd5ab4e3fa2919845b1eb80cda0f WatchSource:0}: Error finding container 89ae49ef5ce6931afc2dfe469d9ffa2030c2cd5ab4e3fa2919845b1eb80cda0f: Status 404 returned error can't find the container with id 89ae49ef5ce6931afc2dfe469d9ffa2030c2cd5ab4e3fa2919845b1eb80cda0f
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.896737 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b53c885_35c8_44b5_87d8_be092079dd0d.slice/crio-9eba5aca639b62f5d2b356b8c9fe2d920cfbe408d3677f04494d97457d9ce243 WatchSource:0}: Error finding container 9eba5aca639b62f5d2b356b8c9fe2d920cfbe408d3677f04494d97457d9ce243: Status 404 returned error can't find the container with id 9eba5aca639b62f5d2b356b8c9fe2d920cfbe408d3677f04494d97457d9ce243
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.914618 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d1dd039_603f_4e23_982c_f2661f163a0d.slice/crio-4d4037d41fb144ff823c3b9f9e2863d12e4594feebc5d8821cab3b0ab2eeb4a3 WatchSource:0}: Error finding container 4d4037d41fb144ff823c3b9f9e2863d12e4594feebc5d8821cab3b0ab2eeb4a3: Status 404 returned error can't find the container with id 4d4037d41fb144ff823c3b9f9e2863d12e4594feebc5d8821cab3b0ab2eeb4a3
Jan 23 06:35:40 crc kubenswrapper[4937]: W0123 06:35:40.922112 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a813a9_1cae_4e6f_9f77_c407d0068c92.slice/crio-a42062176baf67288d61864d70b8d0879c9aa6eab3b39f9274f8068e00b812ac WatchSource:0}: Error finding container a42062176baf67288d61864d70b8d0879c9aa6eab3b39f9274f8068e00b812ac: Status 404 returned error can't find the container with id a42062176baf67288d61864d70b8d0879c9aa6eab3b39f9274f8068e00b812ac
Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.923729 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"
Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.954560 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.954857 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.454810963 +0000 UTC m=+141.258577636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.955053 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:40 crc kubenswrapper[4937]: E0123 06:35:40.955539 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.455522633 +0000 UTC m=+141.259289466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:40 crc kubenswrapper[4937]: I0123 06:35:40.977109 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8twcn"
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.057075 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.057331 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.557291805 +0000 UTC m=+141.361058458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.058883 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.059456 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.559447767 +0000 UTC m=+141.363214420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.161074 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.161329 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.661301952 +0000 UTC m=+141.465068605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.163260 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.163664 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.663651429 +0000 UTC m=+141.467418082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.264795 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.265199 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.765180784 +0000 UTC m=+141.568947437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.329056 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb"]
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.366496 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.367247 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.867233695 +0000 UTC m=+141.671000338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.467847 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.468297 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:41.968277446 +0000 UTC m=+141.772044099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.491770 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wg85h"]
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.496068 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" event={"ID":"7be36b21-cbe7-4374-a796-8974ac58d8ed","Type":"ContainerStarted","Data":"0cb9acef425e179ad6ceec460c356a5e9e4494039d2945e51fe55aeab6e751c0"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.518214 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" event={"ID":"066984c2-78f9-456c-b263-5f108f7be481","Type":"ContainerStarted","Data":"1be73c40efe7b9c3682384ee4eda312b98518951db5bd53bc085ecac40c9bd3a"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.530780 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" event={"ID":"8bda434a-853e-4281-9e1b-1d79f81f6856","Type":"ContainerStarted","Data":"bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.540008 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" event={"ID":"aec15e62-dae9-4706-86f7-32c738b82ead","Type":"ContainerStarted","Data":"89ae49ef5ce6931afc2dfe469d9ffa2030c2cd5ab4e3fa2919845b1eb80cda0f"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.541432 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp" event={"ID":"3168738e-e0e3-43d7-bae7-79276263bb8e","Type":"ContainerStarted","Data":"106ee9d5f51d9f42f8f66e48c628e5fb74f32bce48804d89527b43173d9a04b2"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.542898 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" event={"ID":"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21","Type":"ContainerStarted","Data":"0e370c193881f8cefb633b3994927cfa44498cc87808c3cba24dcec00fd8012d"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.547549 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.549350 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" event={"ID":"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4","Type":"ContainerStarted","Data":"776d6ecb0bef2659679ebf5329d370e385dac57f5f20627fb976b1084a0442f3"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.550339 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6g59k" event={"ID":"57c396e4-f445-46c6-b636-cfe96d07cc43","Type":"ContainerStarted","Data":"016726d710ccef6dbb2765cbd81c0293e15ce1104ffe9160b82787e9766c3b0c"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.552474 4937 generic.go:334] "Generic (PLEG): container finished" podID="d422a1a4-490a-4905-9093-fd39409c55b7" containerID="afb32a1c256bfdd912df7ef1ac43ae7d88391416eb73e23c5772cf837a2305e2" exitCode=0
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.552524 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" event={"ID":"d422a1a4-490a-4905-9093-fd39409c55b7","Type":"ContainerDied","Data":"afb32a1c256bfdd912df7ef1ac43ae7d88391416eb73e23c5772cf837a2305e2"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.554187 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.570771 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.570842 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-62pqr" event={"ID":"4059e769-c52c-44c3-88c0-35ab870cedf8","Type":"ContainerStarted","Data":"68f35ad4c6401ff272922c88eab3dedd8d3cf5d594c2640b90ab4023819acc85"}
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.571240 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.071222903 +0000 UTC m=+141.874989556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.583710 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" event={"ID":"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a","Type":"ContainerStarted","Data":"76724f447e1be6923570dc9cd5806e591b35d6b08fd5256b17ae701722379104"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.587578 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" event={"ID":"1d1dd039-603f-4e23-982c-f2661f163a0d","Type":"ContainerStarted","Data":"4d4037d41fb144ff823c3b9f9e2863d12e4594feebc5d8821cab3b0ab2eeb4a3"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.609095 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" event={"ID":"6613fb7b-17c3-422f-bb49-6c960b765e63","Type":"ContainerStarted","Data":"ba67d0624b9eab7b51d2194f6793ab3740d983ce6416f6ee9d0daaf554657d55"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.611905 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75" event={"ID":"55a813a9-1cae-4e6f-9f77-c407d0068c92","Type":"ContainerStarted","Data":"a42062176baf67288d61864d70b8d0879c9aa6eab3b39f9274f8068e00b812ac"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.661316 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" event={"ID":"1b53c885-35c8-44b5-87d8-be092079dd0d","Type":"ContainerStarted","Data":"9eba5aca639b62f5d2b356b8c9fe2d920cfbe408d3677f04494d97457d9ce243"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.675357 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.676655 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.176636899 +0000 UTC m=+141.980403552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.676936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" event={"ID":"581a142b-da2e-47b5-a96e-bdf19302b1f9","Type":"ContainerStarted","Data":"70ff8cf601c45d167520066080a79d419157553fcac36529aa27532f16cab720"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.744437 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" event={"ID":"c4867dee-6837-4bb3-b7c4-18d9842a005a","Type":"ContainerStarted","Data":"5d85e661a5c3bb07f96527499fd1b91ef2091859babe46e65b70150557af15b4"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.772792 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" event={"ID":"863e1fad-048e-4104-aa38-ca05ffec260a","Type":"ContainerStarted","Data":"dfd396ae0ccfb7502208a66650eac2f4237a7173b500717bd46f64cdfce98c79"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.773311 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.782514 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.782962 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.282948412 +0000 UTC m=+142.086715065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.786697 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" event={"ID":"30f2f2b2-8cb3-47cf-a066-87ddfdd40201","Type":"ContainerStarted","Data":"4e74379f11690259a0774f99f649cb001dc77c86f175531e0653860869de376e"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.815740 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" event={"ID":"51aa0e66-0340-4981-8a2a-b5248a6dd4bb","Type":"ContainerStarted","Data":"1bd7f41385322dd08edd7d623ea72b5a9da4e3d037c9e16c890c094a04743147"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.839573 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" event={"ID":"d59eeb02-0c89-4608-98c6-78a5b88cdd5c","Type":"ContainerStarted","Data":"a4a661d179b993a105b5881ba69d842c6934e1fddaa2caabfccd617749f86fab"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.842199 4937 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ds4g7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.842254 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" podUID="863e1fad-048e-4104-aa38-ca05ffec260a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.843220 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"]
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.876918 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" event={"ID":"5faf4ca9-b656-45c0-8a88-9fc38060e5a9","Type":"ContainerStarted","Data":"2f37fff48a47624620c7712fb6e13c47257ac28b44f1eb961acfe0cb78384dea"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.886447 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.888110 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.388084191 +0000 UTC m=+142.191850844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.907660 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" event={"ID":"c6a1b9ab-e89c-4867-8b96-775cf42abbe3","Type":"ContainerStarted","Data":"f45a3594a7dddcd87b3d028ab4990432b33303c82aa1f634c3ecee4e1ecae7e5"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.960490 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6n269" event={"ID":"a88c49d5-e615-4c41-972e-3a0ddcadfd53","Type":"ContainerStarted","Data":"50783287e8f92574100d0401f3ac1731291064ca4bf5e0e399111ad8988a1065"}
Jan 23 06:35:41 crc kubenswrapper[4937]: I0123 06:35:41.989392 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:41 crc kubenswrapper[4937]: E0123 06:35:41.990017 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.489972427 +0000 UTC m=+142.293739080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.058755 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" podStartSLOduration=121.058731051 podStartE2EDuration="2m1.058731051s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:42.056457607 +0000 UTC m=+141.860224270" watchObservedRunningTime="2026-01-23 06:35:42.058731051 +0000 UTC m=+141.862497704"
Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.096806 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.100204 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.600175042 +0000 UTC m=+142.403941885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.103106 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6n269" podStartSLOduration=121.103084535 podStartE2EDuration="2m1.103084535s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:42.098791192 +0000 UTC m=+141.902557845" watchObservedRunningTime="2026-01-23 06:35:42.103084535 +0000 UTC m=+141.906851188"
Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.180099 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" podStartSLOduration=121.180055756 podStartE2EDuration="2m1.180055756s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:42.15895461 +0000 UTC m=+141.962721263" watchObservedRunningTime="2026-01-23 06:35:42.180055756 +0000 UTC m=+141.983822409"
Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.199350 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.209534 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.709506381 +0000 UTC m=+142.513273034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.229457 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vn8kx"] Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.301018 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.302121 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:35:42.80209957 +0000 UTC m=+142.605866223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.324758 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr"] Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.405438 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.406122 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:42.906102426 +0000 UTC m=+142.709869079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.510989 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.511467 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.011447751 +0000 UTC m=+142.815214404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.636636 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.637132 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.13711099 +0000 UTC m=+142.940877693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: W0123 06:35:42.730009 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03d93fa6_01e1_461e_b5d7_c331afb70cf6.slice/crio-5ff301981bd928d896307e10f393754cea476bdbb8e7c40944e2d2822d2574c1 WatchSource:0}: Error finding container 5ff301981bd928d896307e10f393754cea476bdbb8e7c40944e2d2822d2574c1: Status 404 returned error can't find the container with id 5ff301981bd928d896307e10f393754cea476bdbb8e7c40944e2d2822d2574c1 Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.740100 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.740532 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.240516779 +0000 UTC m=+143.044283432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.765097 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xmf5"] Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.826390 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8d4jd"] Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.842019 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.842372 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.342360314 +0000 UTC m=+143.146126967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.847301 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks"] Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.943099 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.943264 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.44322821 +0000 UTC m=+143.246994863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.943812 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:42 crc kubenswrapper[4937]: E0123 06:35:42.944257 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.444241319 +0000 UTC m=+143.248007972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:42 crc kubenswrapper[4937]: I0123 06:35:42.997551 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vvxbh"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.034343 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.044871 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.045327 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.545308741 +0000 UTC m=+143.349075394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.060414 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" event={"ID":"03d93fa6-01e1-461e-b5d7-c331afb70cf6","Type":"ContainerStarted","Data":"5ff301981bd928d896307e10f393754cea476bdbb8e7c40944e2d2822d2574c1"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.071446 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dvjv9"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.085106 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.097776 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" event={"ID":"d59eeb02-0c89-4608-98c6-78a5b88cdd5c","Type":"ContainerStarted","Data":"bceddbe856a5ed1914023078a9875a180b934f67d1b4347d1d0da5e8eadf0ffe"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.114245 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" event={"ID":"5faf4ca9-b656-45c0-8a88-9fc38060e5a9","Type":"ContainerStarted","Data":"e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.115573 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.119775 4937 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rsth7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.119854 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" podUID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.128645 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-wf8j9" podStartSLOduration=122.128621553 podStartE2EDuration="2m2.128621553s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.128076798 +0000 UTC m=+142.931843451" watchObservedRunningTime="2026-01-23 06:35:43.128621553 +0000 UTC m=+142.932388206" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.148425 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vn8kx" event={"ID":"f613b04d-1d2f-4d5e-a958-308811ae4ff9","Type":"ContainerStarted","Data":"98dc37c84d6c1b69bf7157f637164222a8815afe994921c516d5d6f0ce9036b5"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.152170 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.152767 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.652751987 +0000 UTC m=+143.456518640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.179800 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6g59k" event={"ID":"57c396e4-f445-46c6-b636-cfe96d07cc43","Type":"ContainerStarted","Data":"752f04b6d6ed1b86f045f14a6eeb80f6327af9823e0f7c21bf8c49354b28f223"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.181477 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" podStartSLOduration=122.181463491 podStartE2EDuration="2m2.181463491s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.172746741 +0000 UTC m=+142.976513394" watchObservedRunningTime="2026-01-23 06:35:43.181463491 +0000 UTC m=+142.985230134" Jan 23 06:35:43 crc 
kubenswrapper[4937]: I0123 06:35:43.184951 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" event={"ID":"ecbb881d-4369-4366-a3ae-e6520b39ef2a","Type":"ContainerStarted","Data":"1903cf35c5464d7920f3a6fb0fee7ac3c88d4e084362dd8e08f9c6f1785b34e0"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.186570 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6g59k" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.187301 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" event={"ID":"ebfb85b7-e4ba-4391-82c1-ccf6f0143d3a","Type":"ContainerStarted","Data":"a3a4f1b862044be07d9e60ebda93d171b095b86390ec92a0e8c1cab35c568695"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.189871 4937 patch_prober.go:28] interesting pod/downloads-7954f5f757-6g59k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.189941 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6g59k" podUID="57c396e4-f445-46c6-b636-cfe96d07cc43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.194352 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8twcn" event={"ID":"52af4349-f45c-4926-8eba-4f68a4feadf3","Type":"ContainerStarted","Data":"f63f0db22f89fda9beff55d73b742311278a5526a46abd4d9c57d374088eb4e0"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.203373 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" event={"ID":"35bbdc02-0a04-4a8f-a8a2-d7586dc04036","Type":"ContainerStarted","Data":"b721abd2d413ae32c1b6195a01510fe6b09214578935ccedab3a22d46bb84053"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.227857 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" event={"ID":"581a142b-da2e-47b5-a96e-bdf19302b1f9","Type":"ContainerStarted","Data":"9b488bd0d22744831654679b3b1669f79c1dbf70b44ac411b6da8b2be10fabc8"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.241191 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.247856 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" event={"ID":"c6a1b9ab-e89c-4867-8b96-775cf42abbe3","Type":"ContainerStarted","Data":"8a1026b286bc82f5ad95c9cab56e2d84a310f2ec91d31a0ec2be4aaa3d137237"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.260090 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.261981 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6g59k" podStartSLOduration=122.261966543 podStartE2EDuration="2m2.261966543s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.228349067 +0000 UTC m=+143.032115730" watchObservedRunningTime="2026-01-23 06:35:43.261966543 +0000 UTC m=+143.065733196" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.264773 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.290510 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.790473951 +0000 UTC m=+143.594240594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.304479 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.318286 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qztwb" podStartSLOduration=122.318259889 podStartE2EDuration="2m2.318259889s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.280744262 +0000 UTC m=+143.084510915" watchObservedRunningTime="2026-01-23 06:35:43.318259889 +0000 UTC m=+143.122026542" Jan 23 06:35:43 
crc kubenswrapper[4937]: I0123 06:35:43.386405 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.386789 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp" event={"ID":"3168738e-e0e3-43d7-bae7-79276263bb8e","Type":"ContainerStarted","Data":"62a2af03a8bb63d3e06eb176de96a64f4ba7abb3c0e9820b39fe933172ab12f2"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.398797 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.400078 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.402656 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" podStartSLOduration=122.402629872 podStartE2EDuration="2m2.402629872s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.385892971 +0000 UTC m=+143.189659624" watchObservedRunningTime="2026-01-23 06:35:43.402629872 +0000 UTC m=+143.206396525" Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.403119 4937 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:43.903095745 +0000 UTC m=+143.706862398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.418723 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qzw6p"] Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.464847 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" event={"ID":"f106b4d2-ffb2-4cdb-bb19-3f107ac274af","Type":"ContainerStarted","Data":"f265744d797557d9f82731ee7281a1dbc6ed82a2417970720ddf6c2c9f17f221"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.507504 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.508445 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 06:35:44.008400949 +0000 UTC m=+143.812167612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.509508 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" event={"ID":"863e1fad-048e-4104-aa38-ca05ffec260a","Type":"ContainerStarted","Data":"cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.512561 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" event={"ID":"ecae0951-0752-4c13-98ab-6fa8f4f86c33","Type":"ContainerStarted","Data":"e5c2f73f84f25619dc53d78b85e155633f2280abd574eaa29b22af02b7fa2e55"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.514134 4937 generic.go:334] "Generic (PLEG): container finished" podID="30f2f2b2-8cb3-47cf-a066-87ddfdd40201" containerID="d6c28a8bfb7c56fc790f962c06028eeffa128d0d9a2710bb9666f1f60541094b" exitCode=0 Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.514209 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" event={"ID":"30f2f2b2-8cb3-47cf-a066-87ddfdd40201","Type":"ContainerDied","Data":"d6c28a8bfb7c56fc790f962c06028eeffa128d0d9a2710bb9666f1f60541094b"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.520123 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" event={"ID":"dda6ba11-61ab-4501-afa3-5bb654f352ea","Type":"ContainerStarted","Data":"865e5620e83c0c3d592b1381ef914f01be3ed643d6b02d61c548ba8fbabe6593"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.556751 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6n269" event={"ID":"a88c49d5-e615-4c41-972e-3a0ddcadfd53","Type":"ContainerStarted","Data":"057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.559123 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.566850 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" event={"ID":"aec15e62-dae9-4706-86f7-32c738b82ead","Type":"ContainerStarted","Data":"7a5ba4be17729bfa861439f25830a1c2103f8ff9b0f437efaed61f419c5c12b5"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.590069 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-62pqr" event={"ID":"4059e769-c52c-44c3-88c0-35ab870cedf8","Type":"ContainerStarted","Data":"fd5d9bd49643b446cb5724e27c802bb828fed01df8fc6d6d140265c6ad7bc777"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.600042 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bbtnp" podStartSLOduration=122.60002059 podStartE2EDuration="2m2.60002059s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.599019821 +0000 UTC m=+143.402786474" watchObservedRunningTime="2026-01-23 06:35:43.60002059 +0000 UTC 
m=+143.403787243" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.600331 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzkz5" podStartSLOduration=122.600327778 podStartE2EDuration="2m2.600327778s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.540393867 +0000 UTC m=+143.344160520" watchObservedRunningTime="2026-01-23 06:35:43.600327778 +0000 UTC m=+143.404094431" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.602880 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" event={"ID":"1b53c885-35c8-44b5-87d8-be092079dd0d","Type":"ContainerStarted","Data":"524aa9d0a9d58fb94c0d25280f3338b885c62846594bbe0db40cc3b9249f9fda"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.615499 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.616307 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.116286387 +0000 UTC m=+143.920053030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.629362 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" event={"ID":"d2d158c3-bd29-4c22-97fd-be2db4d77d86","Type":"ContainerStarted","Data":"d08e1cea786950d54ca162a756ea14c925ffbd163b090d9c0fc8f5e891f5b551"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.672674 4937 generic.go:334] "Generic (PLEG): container finished" podID="c4867dee-6837-4bb3-b7c4-18d9842a005a" containerID="08f1fc95887efcd09692b55e87c960601354ed429399b34fbe09a7c8404d3fff" exitCode=0 Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.673573 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" event={"ID":"c4867dee-6837-4bb3-b7c4-18d9842a005a","Type":"ContainerDied","Data":"08f1fc95887efcd09692b55e87c960601354ed429399b34fbe09a7c8404d3fff"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.716059 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-56z88" podStartSLOduration=122.716037141 podStartE2EDuration="2m2.716037141s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.715862466 +0000 UTC m=+143.519629129" watchObservedRunningTime="2026-01-23 06:35:43.716037141 +0000 UTC m=+143.519803794" Jan 23 
06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.716503 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.717359 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.217342119 +0000 UTC m=+144.021108772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.747804 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" event={"ID":"51aa0e66-0340-4981-8a2a-b5248a6dd4bb","Type":"ContainerStarted","Data":"1f32e175f2d9e0741d9dcf5fee48a4cb3d69e7176edd3618e6cf21728a7670b0"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.748012 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.749620 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" event={"ID":"066984c2-78f9-456c-b263-5f108f7be481","Type":"ContainerStarted","Data":"dc04ee9d03c8a87195178c02ffcadd30b2a6a92f3c5e4ff16c64320b50100118"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.789734 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" event={"ID":"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21","Type":"ContainerStarted","Data":"f0469f2a5d1f669dde2f9d4f6a2eee8714dcb3872996a8293d61babb1f8020fa"} Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.804319 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-62bgj" podStartSLOduration=122.804292226 podStartE2EDuration="2m2.804292226s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.761555668 +0000 UTC m=+143.565322321" watchObservedRunningTime="2026-01-23 06:35:43.804292226 +0000 UTC m=+143.608058879" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.818909 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.819756 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.319740329 +0000 UTC m=+144.123507012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.852694 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-62pqr" podStartSLOduration=122.852668974 podStartE2EDuration="2m2.852668974s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.807550229 +0000 UTC m=+143.611316882" watchObservedRunningTime="2026-01-23 06:35:43.852668974 +0000 UTC m=+143.656435627" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.898671 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.901919 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ffdpb" podStartSLOduration=122.901908219 podStartE2EDuration="2m2.901908219s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.899976703 +0000 UTC m=+143.703743356" watchObservedRunningTime="2026-01-23 06:35:43.901908219 +0000 UTC m=+143.705674872" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.910833 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:43 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:43 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:43 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.910894 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.929917 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:43 crc kubenswrapper[4937]: E0123 06:35:43.932439 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.432422594 +0000 UTC m=+144.236189247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:43 crc kubenswrapper[4937]: I0123 06:35:43.974544 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" podStartSLOduration=122.974517203 podStartE2EDuration="2m2.974517203s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:43.963113806 +0000 UTC m=+143.766880459" watchObservedRunningTime="2026-01-23 06:35:43.974517203 +0000 UTC m=+143.778283856" Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.032228 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.032572 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.53255704 +0000 UTC m=+144.336323693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.133559 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.133902 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.633876089 +0000 UTC m=+144.437642752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.134288 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.134746 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.634736734 +0000 UTC m=+144.438503387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.247196 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.249769 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.749716526 +0000 UTC m=+144.553483179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.249962 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.251114 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.751093725 +0000 UTC m=+144.554860378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.354130 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.354457 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.854440083 +0000 UTC m=+144.658206736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.458520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.459667 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:44.959647664 +0000 UTC m=+144.763414317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.562126 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.562469 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.062453116 +0000 UTC m=+144.866219769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.664690 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.665128 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.165107304 +0000 UTC m=+144.968873957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.769993 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.770387 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.270360636 +0000 UTC m=+145.074127289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.795613 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" event={"ID":"87779486-8ef2-4883-905c-efd17bbfd5ce","Type":"ContainerStarted","Data":"099978b60d3f5ddd5bc7b096e11e4e7fe99f6e9c764ab4f278e42281a1eaa5e4"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.795668 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" event={"ID":"87779486-8ef2-4883-905c-efd17bbfd5ce","Type":"ContainerStarted","Data":"684681d276b8d81d15368644dfd863142b81c080304f66039455dc6c719396ad"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.825557 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" event={"ID":"f106b4d2-ffb2-4cdb-bb19-3f107ac274af","Type":"ContainerStarted","Data":"82e4b5c7d9dea6489f86a84dc9493d94268293b6232dcdf141aab548257c2142"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.855950 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" event={"ID":"30f2f2b2-8cb3-47cf-a066-87ddfdd40201","Type":"ContainerStarted","Data":"a33364b823802561dac5c70560eca09f93ab4099e8dfe614e3f78db40a79dfe4"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.856012 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" Jan 
23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.866976 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" event={"ID":"0e1bc8a7-4de4-4e7b-8771-c355e67a8279","Type":"ContainerStarted","Data":"fb747423ccf95226a92e5007a0b61ffe28d235e28dc530d51bc8d9de18782d6c"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.867033 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" event={"ID":"0e1bc8a7-4de4-4e7b-8771-c355e67a8279","Type":"ContainerStarted","Data":"6316b17919f36a780b6352d2f9b39fbfce7edd77c376df8508643b7e6c90299e"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.882189 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.883331 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.383315819 +0000 UTC m=+145.187082472 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.908747 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:44 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:44 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:44 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.908818 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.921140 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" event={"ID":"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4","Type":"ContainerStarted","Data":"56a9f8e2f0964556b66298532f9d8b2d011670c796d98dd4cc625a82ffd94e8c"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.983865 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:44 crc kubenswrapper[4937]: E0123 06:35:44.985014 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.484991939 +0000 UTC m=+145.288758592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.997824 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" event={"ID":"03d93fa6-01e1-461e-b5d7-c331afb70cf6","Type":"ContainerStarted","Data":"752f660f83eee2528317690137c13994572398f1d1257d7f063391ed255449df"} Jan 23 06:35:44 crc kubenswrapper[4937]: I0123 06:35:44.998166 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.000358 4937 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hplkr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.000401 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" 
podUID="03d93fa6-01e1-461e-b5d7-c331afb70cf6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.042844 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" event={"ID":"297bbd69-aad2-415f-8b99-9035016c99b9","Type":"ContainerStarted","Data":"07ea73efcbb87b5921b2bd8f4e450e2addef3be80db9a596e80444450dd537fe"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.042922 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.042936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" event={"ID":"297bbd69-aad2-415f-8b99-9035016c99b9","Type":"ContainerStarted","Data":"6f3190e828978d1127819949468b7e1b45c3f43bad614a533046bd57a9b94ac8"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.049232 4937 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-b6qxt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.049292 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" podUID="297bbd69-aad2-415f-8b99-9035016c99b9" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.059069 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-vn8kx" event={"ID":"f613b04d-1d2f-4d5e-a958-308811ae4ff9","Type":"ContainerStarted","Data":"f15ec42e89526e316c330b6165983b037cecc960425bc6f41f56a730f9dc49e5"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.059125 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vn8kx" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.066267 4937 patch_prober.go:28] interesting pod/console-operator-58897d9998-vn8kx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.066340 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vn8kx" podUID="f613b04d-1d2f-4d5e-a958-308811ae4ff9" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.068467 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4" podStartSLOduration=124.068450756 podStartE2EDuration="2m4.068450756s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:44.901802521 +0000 UTC m=+144.705569174" watchObservedRunningTime="2026-01-23 06:35:45.068450756 +0000 UTC m=+144.872217409" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.068651 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr" 
podStartSLOduration=124.068646902 podStartE2EDuration="2m4.068646902s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.06684721 +0000 UTC m=+144.870613863" watchObservedRunningTime="2026-01-23 06:35:45.068646902 +0000 UTC m=+144.872413555" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.089959 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.091722 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.591708813 +0000 UTC m=+145.395475466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.102541 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" event={"ID":"ecae0951-0752-4c13-98ab-6fa8f4f86c33","Type":"ContainerStarted","Data":"f819f4315b10e5f76bf96cfb7c0ea4f0e81f6a3794753f525b4786a3f2b6bd89"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.145504 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" event={"ID":"b43b4dab-66a6-47e9-bb22-5cc338833a5e","Type":"ContainerStarted","Data":"0abc458e2e9deee0d4573658b78276f116076050905e84d1924f465e05875c98"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.193046 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.193994 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" event={"ID":"d0a141fd-dece-4e1c-8cb3-794c0b3f6b21","Type":"ContainerStarted","Data":"7b2ca53a8799596ba43d13a36b3f3b2995b825615d641bd3364fdfca8bc14dcd"} Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.194344 4937 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.694316951 +0000 UTC m=+145.498083604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.212478 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" event={"ID":"581a142b-da2e-47b5-a96e-bdf19302b1f9","Type":"ContainerStarted","Data":"347ad09c2545cb41c82304fbcee3c316b168afd12561b831e1b0dbbe6518fb2c"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.229606 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt" podStartSLOduration=124.229557012 podStartE2EDuration="2m4.229557012s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.227727899 +0000 UTC m=+145.031494552" watchObservedRunningTime="2026-01-23 06:35:45.229557012 +0000 UTC m=+145.033323665" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.230617 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vn8kx" podStartSLOduration=124.230609523 podStartE2EDuration="2m4.230609523s" 
podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.138977641 +0000 UTC m=+144.942744294" watchObservedRunningTime="2026-01-23 06:35:45.230609523 +0000 UTC m=+145.034376176" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.245449 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qzw6p" event={"ID":"0df5bf4c-692d-4a83-b70d-47c0d8294395","Type":"ContainerStarted","Data":"1463cdc610e2806e8fa3da856d618b0adee4b1dd5e7fdd6c5a0ce8d9719e841b"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.276520 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" podStartSLOduration=124.27650094 podStartE2EDuration="2m4.27650094s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.27403037 +0000 UTC m=+145.077797023" watchObservedRunningTime="2026-01-23 06:35:45.27650094 +0000 UTC m=+145.080267593" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.284037 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" event={"ID":"dda6ba11-61ab-4501-afa3-5bb654f352ea","Type":"ContainerStarted","Data":"8472eb38158528f591e151fb55471c861b866310e1c09e0a8ead7e695c108f56"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.285154 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.292682 4937 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xmf5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.292759 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.294986 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z6bsb" event={"ID":"35bbdc02-0a04-4a8f-a8a2-d7586dc04036","Type":"ContainerStarted","Data":"bde3a94bf42fab4e1f3084998c1367f4a3c4f5e6bb32a82a2ade3be50709de1f"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.299609 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.301705 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.801688093 +0000 UTC m=+145.605454746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.306457 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" event={"ID":"46ddd3f1-b28d-4390-80f5-92990c25a964","Type":"ContainerStarted","Data":"d12441852465203dcea50febef03ded7f007f8634d847c45444c52e51c18850a"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.306516 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" event={"ID":"46ddd3f1-b28d-4390-80f5-92990c25a964","Type":"ContainerStarted","Data":"e072c5db1c1fe915aaa1beda575ea9450985dc604131039438dec14fecd3a93b"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.335261 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" event={"ID":"d422a1a4-490a-4905-9093-fd39409c55b7","Type":"ContainerStarted","Data":"0f5e92f8e649b6d8120bdcd8cf769259096cad5bbf7e0000ebba89713e366dfd"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.358316 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n9444" podStartSLOduration=124.358293989 podStartE2EDuration="2m4.358293989s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
06:35:45.322268034 +0000 UTC m=+145.126034687" watchObservedRunningTime="2026-01-23 06:35:45.358293989 +0000 UTC m=+145.162060642" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.386787 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ss7mm" podStartSLOduration=124.386763536 podStartE2EDuration="2m4.386763536s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.36009605 +0000 UTC m=+145.163862703" watchObservedRunningTime="2026-01-23 06:35:45.386763536 +0000 UTC m=+145.190530199" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.387711 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qzw6p" podStartSLOduration=9.387704784 podStartE2EDuration="9.387704784s" podCreationTimestamp="2026-01-23 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.384403158 +0000 UTC m=+145.188169801" watchObservedRunningTime="2026-01-23 06:35:45.387704784 +0000 UTC m=+145.191471437" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.394955 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" event={"ID":"51aa0e66-0340-4981-8a2a-b5248a6dd4bb","Type":"ContainerStarted","Data":"0de265696abddd606a5a2dce8027524171ea59703333965d3ed006e813169d6c"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.398546 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vvxbh" event={"ID":"a410c4c7-42d4-41e3-bf32-17e9efb91983","Type":"ContainerStarted","Data":"f38a0625c507bb4b6de69e7cbda0f6aa077c3f540a6fc4a382318e876208559a"} Jan 23 06:35:45 crc 
kubenswrapper[4937]: I0123 06:35:45.398578 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vvxbh" event={"ID":"a410c4c7-42d4-41e3-bf32-17e9efb91983","Type":"ContainerStarted","Data":"2edb770ddcc65ced4afb25182427e39ee6fdd3635595e5107b4233392cfbaef0"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.400725 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" event={"ID":"d2d158c3-bd29-4c22-97fd-be2db4d77d86","Type":"ContainerStarted","Data":"06d11f737b9823cae76a82c45cfdf02325ed27d2cec7c88e9c2f2f56d8b80dcc"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.404321 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.405651 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:45.905633879 +0000 UTC m=+145.709400522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.434486 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" event={"ID":"7be36b21-cbe7-4374-a796-8974ac58d8ed","Type":"ContainerStarted","Data":"bc9e38ab82134f4c6f596d7dd02e893ece35871c0c092bc10f59e67aabfd6655"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.465693 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" event={"ID":"cf5a5a06-3af5-4695-a8b9-71d07ba3470b","Type":"ContainerStarted","Data":"8855f563cd42bd79c8425066683b834b69557a9be56a341039ec183234b8120e"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.476060 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-r7hgr" podStartSLOduration=124.47604359 podStartE2EDuration="2m4.47604359s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.433063036 +0000 UTC m=+145.236829679" watchObservedRunningTime="2026-01-23 06:35:45.47604359 +0000 UTC m=+145.279810243" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.479217 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" 
podStartSLOduration=124.479207651 podStartE2EDuration="2m4.479207651s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.477965845 +0000 UTC m=+145.281732508" watchObservedRunningTime="2026-01-23 06:35:45.479207651 +0000 UTC m=+145.282974304" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.507476 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.509823 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.00980192 +0000 UTC m=+145.813568573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.553502 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" podStartSLOduration=124.553485184 podStartE2EDuration="2m4.553485184s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.553050121 +0000 UTC m=+145.356816774" watchObservedRunningTime="2026-01-23 06:35:45.553485184 +0000 UTC m=+145.357251837" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.555546 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wg85h" podStartSLOduration=124.555531863 podStartE2EDuration="2m4.555531863s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.517211312 +0000 UTC m=+145.320977965" watchObservedRunningTime="2026-01-23 06:35:45.555531863 +0000 UTC m=+145.359298516" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.579925 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" 
event={"ID":"1d1dd039-603f-4e23-982c-f2661f163a0d","Type":"ContainerStarted","Data":"df6cebc61a02d061e10f76ba33fb7ea07fca385bed960a4ec5f37a2931ff7f1d"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.592747 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75" event={"ID":"55a813a9-1cae-4e6f-9f77-c407d0068c92","Type":"ContainerStarted","Data":"2401db63d822a69bba687c065089593b8cc78c117946b1fbbb77ec9a42829ffa"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.613406 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.614738 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.114716122 +0000 UTC m=+145.918482775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.645627 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8twcn" event={"ID":"52af4349-f45c-4926-8eba-4f68a4feadf3","Type":"ContainerStarted","Data":"d55ed9767938684729fe42b4a04ca073848688aef6bd286b6c5801619d03e892"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.654632 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-z7zfr" podStartSLOduration=124.654614317 podStartE2EDuration="2m4.654614317s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.640248155 +0000 UTC m=+145.444014808" watchObservedRunningTime="2026-01-23 06:35:45.654614317 +0000 UTC m=+145.458380970" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.658241 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" event={"ID":"6613fb7b-17c3-422f-bb49-6c960b765e63","Type":"ContainerStarted","Data":"4404c163812513006246e58cee2b0e76820b06415778dd7c97dabbf70357eb41"} Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.660255 4937 patch_prober.go:28] interesting pod/downloads-7954f5f757-6g59k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 
10.217.0.9:8080: connect: connection refused" start-of-body= Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.660323 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6g59k" podUID="57c396e4-f445-46c6-b636-cfe96d07cc43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.669488 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.718694 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.720928 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.220915952 +0000 UTC m=+146.024682605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.740944 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-p2t75" podStartSLOduration=124.740912876 podStartE2EDuration="2m4.740912876s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.725761321 +0000 UTC m=+145.529527974" watchObservedRunningTime="2026-01-23 06:35:45.740912876 +0000 UTC m=+145.544679519" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.741993 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5hmd" podStartSLOduration=124.741987667 podStartE2EDuration="2m4.741987667s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.684238859 +0000 UTC m=+145.488005512" watchObservedRunningTime="2026-01-23 06:35:45.741987667 +0000 UTC m=+145.545754320" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.772655 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-kj2c4" podStartSLOduration=124.772625877 podStartE2EDuration="2m4.772625877s" podCreationTimestamp="2026-01-23 06:33:41 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.769794135 +0000 UTC m=+145.573560788" watchObservedRunningTime="2026-01-23 06:35:45.772625877 +0000 UTC m=+145.576392530" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.820611 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.821040 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.321017576 +0000 UTC m=+146.124784229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.845984 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8twcn" podStartSLOduration=9.845959292 podStartE2EDuration="9.845959292s" podCreationTimestamp="2026-01-23 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:45.843782459 +0000 UTC m=+145.647549112" watchObservedRunningTime="2026-01-23 06:35:45.845959292 +0000 UTC m=+145.649725945" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.912858 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:45 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:45 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:45 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.912924 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:45 crc kubenswrapper[4937]: I0123 06:35:45.922997 4937 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:45 crc kubenswrapper[4937]: E0123 06:35:45.923377 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.423360784 +0000 UTC m=+146.227127437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.024531 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.024765 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.524729896 +0000 UTC m=+146.328496559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.024933 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.025404 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.525394665 +0000 UTC m=+146.329161318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.099059 4937 csr.go:261] certificate signing request csr-lwn9h is approved, waiting to be issued Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.117147 4937 csr.go:257] certificate signing request csr-lwn9h is issued Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.125481 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.125652 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.625633963 +0000 UTC m=+146.429400616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.125715 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.126080 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.626057365 +0000 UTC m=+146.429824018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.227371 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.227912 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.72789312 +0000 UTC m=+146.531659773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.329715 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.330150 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.830125025 +0000 UTC m=+146.633891848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.430662 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.430972 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:46.9309537 +0000 UTC m=+146.734720353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.531885 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.532242 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.032229338 +0000 UTC m=+146.835995991 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.632990 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.633221 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.133189117 +0000 UTC m=+146.936955770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.633357 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.633751 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.133736914 +0000 UTC m=+146.937503567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.686467 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" event={"ID":"1b2914a6-9bc3-4d47-b951-07cd54c2f8e4","Type":"ContainerStarted","Data":"fe030f02a996853fc05269996d5c852ce35c62894f7d271276f261825aed7957"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.688504 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qzw6p" event={"ID":"0df5bf4c-692d-4a83-b70d-47c0d8294395","Type":"ContainerStarted","Data":"afd79c1dced833c1c697997eee6102675368704e995596653a6608f8b568f25d"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.704110 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vvxbh" event={"ID":"a410c4c7-42d4-41e3-bf32-17e9efb91983","Type":"ContainerStarted","Data":"7e0b95d36fe52e7cbc38c8b0f844930678f66bfa2e9e684e435cf5abd7859f8a"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.705050 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.717772 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2h4rf" podStartSLOduration=125.717742136 podStartE2EDuration="2m5.717742136s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:46.714152112 +0000 UTC m=+146.517918775" watchObservedRunningTime="2026-01-23 06:35:46.717742136 +0000 UTC m=+146.521508789" Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.720176 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" event={"ID":"c4867dee-6837-4bb3-b7c4-18d9842a005a","Type":"ContainerStarted","Data":"97da8aeba8ac5beba971826bb544a5353cc5d270362b840b2efda458d8945a5b"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.726631 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z2nn6" event={"ID":"cf5a5a06-3af5-4695-a8b9-71d07ba3470b","Type":"ContainerStarted","Data":"a153f6189debcb9fe4970a81121549d75787bf12ccba80e7fc921cfaa17518e0"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.734152 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.734337 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" event={"ID":"ecbb881d-4369-4366-a3ae-e6520b39ef2a","Type":"ContainerStarted","Data":"b787da3c26dd784ff0393490ce64486455d2f98e467cca6f28adfbb3f0c3e2cd"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.734374 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" event={"ID":"ecbb881d-4369-4366-a3ae-e6520b39ef2a","Type":"ContainerStarted","Data":"3c36b130c0819adb4811c9cbe87354bee15f893a5288f9db11d4ac96afddd436"} Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 
06:35:46.734504 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.234470166 +0000 UTC m=+147.038236819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.734643 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.736304 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.236292138 +0000 UTC m=+147.040058791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.755256 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" event={"ID":"d422a1a4-490a-4905-9093-fd39409c55b7","Type":"ContainerStarted","Data":"6f7bf694e0ef517f30860f6d0eac829ab8e4fbd1440b77155790c96a306029ce"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.767572 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" event={"ID":"87779486-8ef2-4883-905c-efd17bbfd5ce","Type":"ContainerStarted","Data":"bda13e3d9e74d31592455792c3ea5fc90e73623a40318e15b22b6d7645e90718"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.776344 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" event={"ID":"f106b4d2-ffb2-4cdb-bb19-3f107ac274af","Type":"ContainerStarted","Data":"5206fc9cc8a22d376426eaee791e60c65f82c3d517d6f71464ee4f13bde78015"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.797901 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" event={"ID":"0e1bc8a7-4de4-4e7b-8771-c355e67a8279","Type":"ContainerStarted","Data":"e242a46b4339a394c1348a6940bf383504381e6b2dd7ae2f087be54b82ffbf17"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.802895 4937 generic.go:334] "Generic (PLEG): container finished" podID="ecae0951-0752-4c13-98ab-6fa8f4f86c33" 
containerID="f819f4315b10e5f76bf96cfb7c0ea4f0e81f6a3794753f525b4786a3f2b6bd89" exitCode=0 Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.802995 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" event={"ID":"ecae0951-0752-4c13-98ab-6fa8f4f86c33","Type":"ContainerDied","Data":"f819f4315b10e5f76bf96cfb7c0ea4f0e81f6a3794753f525b4786a3f2b6bd89"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.837565 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.837807 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.337770492 +0000 UTC m=+147.141537145 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.838205 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.838897 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.338882464 +0000 UTC m=+147.142649117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.866186 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" event={"ID":"b43b4dab-66a6-47e9-bb22-5cc338833a5e","Type":"ContainerStarted","Data":"7bd48553506c02d0c5433ea2def995667a33b6cdd79b2b1d53ac071f4feb2755"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.866254 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" event={"ID":"b43b4dab-66a6-47e9-bb22-5cc338833a5e","Type":"ContainerStarted","Data":"88b836cccbddc0e19d8ba71b896137cacf96b0de3418af7fe04619152eac7133"} Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.871788 4937 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xmf5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.872327 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.871915 4937 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-6g59k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.872400 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6g59k" podUID="57c396e4-f445-46c6-b636-cfe96d07cc43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.884784 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-b6qxt"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.892181 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vvxbh" podStartSLOduration=9.892160934 podStartE2EDuration="9.892160934s" podCreationTimestamp="2026-01-23 06:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:46.793136671 +0000 UTC m=+146.596903324" watchObservedRunningTime="2026-01-23 06:35:46.892160934 +0000 UTC m=+146.695927587"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.893252 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" podStartSLOduration=125.893246285 podStartE2EDuration="2m5.893246285s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:46.889988632 +0000 UTC m=+146.693755295" watchObservedRunningTime="2026-01-23 06:35:46.893246285 +0000 UTC m=+146.697012938"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.910784 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 06:35:46 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld
Jan 23 06:35:46 crc kubenswrapper[4937]: [+]process-running ok
Jan 23 06:35:46 crc kubenswrapper[4937]: healthz check failed
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.910850 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.925498 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vd4ks" podStartSLOduration=125.925476041 podStartE2EDuration="2m5.925476041s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:46.92230202 +0000 UTC m=+146.726068673" watchObservedRunningTime="2026-01-23 06:35:46.925476041 +0000 UTC m=+146.729242694"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.943974 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vn8kx"
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.944944 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:46 crc kubenswrapper[4937]: E0123 06:35:46.945444 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.445423983 +0000 UTC m=+147.249190636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:46 crc kubenswrapper[4937]: I0123 06:35:46.972120 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dvjv9" podStartSLOduration=125.972100609 podStartE2EDuration="2m5.972100609s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:46.970400601 +0000 UTC m=+146.774167264" watchObservedRunningTime="2026-01-23 06:35:46.972100609 +0000 UTC m=+146.775867262"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.041213 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2k8t2" podStartSLOduration=126.041194964 podStartE2EDuration="2m6.041194964s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:47.040165274 +0000 UTC m=+146.843931937" watchObservedRunningTime="2026-01-23 06:35:47.041194964 +0000 UTC m=+146.844961617"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.041315 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" podStartSLOduration=126.041311347 podStartE2EDuration="2m6.041311347s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:47.00938111 +0000 UTC m=+146.813147763" watchObservedRunningTime="2026-01-23 06:35:47.041311347 +0000 UTC m=+146.845078000"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.052804 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.064944 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.564925225 +0000 UTC m=+147.368691878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.069849 4937 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.119011 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 06:30:46 +0000 UTC, rotation deadline is 2026-11-30 14:43:29.772624724 +0000 UTC
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.119062 4937 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7472h7m42.653565375s for next certificate rotation
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.154124 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.154543 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.654526748 +0000 UTC m=+147.458293401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.167239 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x2zhr" podStartSLOduration=126.167220203 podStartE2EDuration="2m6.167220203s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:47.166023958 +0000 UTC m=+146.969790611" watchObservedRunningTime="2026-01-23 06:35:47.167220203 +0000 UTC m=+146.970986856"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.255836 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.256232 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.756217878 +0000 UTC m=+147.559984521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.317142 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hplkr"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.358925 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.359344 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.859317709 +0000 UTC m=+147.663084362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.461339 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.461848 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:47.961832262 +0000 UTC m=+147.765598915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.538455 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-742j8"]
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.539396 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: W0123 06:35:47.541276 4937 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.541325 4937 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.562441 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.562737 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.062695338 +0000 UTC m=+147.866461991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.563179 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.564046 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.063587204 +0000 UTC m=+147.867353857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lsjwm" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.564399 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-742j8"]
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.664708 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.664937 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhj2r\" (UniqueName: \"kubernetes.io/projected/a5003ecc-816a-494a-b773-0edb552a55f3-kube-api-access-jhj2r\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.664989 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-utilities\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.665066 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-catalog-content\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: E0123 06:35:47.665392 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 06:35:48.165191631 +0000 UTC m=+147.968958284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.729812 4937 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T06:35:47.070136094Z","Handler":null,"Name":""}
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.738318 4937 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.738505 4937 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.767314 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qk5qp"]
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.769184 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-catalog-content\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.769396 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhj2r\" (UniqueName: \"kubernetes.io/projected/a5003ecc-816a-494a-b773-0edb552a55f3-kube-api-access-jhj2r\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.774030 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-utilities\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.774258 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.775335 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-utilities\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.775638 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-catalog-content\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.776184 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.777900 4937 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.777932 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.780671 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.787107 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qk5qp"]
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.797606 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhj2r\" (UniqueName: \"kubernetes.io/projected/a5003ecc-816a-494a-b773-0edb552a55f3-kube-api-access-jhj2r\") pod \"certified-operators-742j8\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.821442 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lsjwm\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") " pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.879477 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.879846 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-utilities\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.879881 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-catalog-content\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.879967 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wkj\" (UniqueName: \"kubernetes.io/projected/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-kube-api-access-d4wkj\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.888947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" event={"ID":"ecbb881d-4369-4366-a3ae-e6520b39ef2a","Type":"ContainerStarted","Data":"7b24ab42d1fc7238a83dd410d49930487dcc47382090b297f76f392e755f44ee"}
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.889138 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" event={"ID":"ecbb881d-4369-4366-a3ae-e6520b39ef2a","Type":"ContainerStarted","Data":"924fa1970ca94605e3a78b2e334d3645c660d28b43ad39ba3ab9862f006cc878"}
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.890557 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.894617 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.899460 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2dl4"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.901765 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 06:35:47 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld
Jan 23 06:35:47 crc kubenswrapper[4937]: [+]process-running ok
Jan 23 06:35:47 crc kubenswrapper[4937]: healthz check failed
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.901805 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.952340 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8d4jd" podStartSLOduration=11.952259654 podStartE2EDuration="11.952259654s" podCreationTimestamp="2026-01-23 06:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:47.951138903 +0000 UTC m=+147.754905576" watchObservedRunningTime="2026-01-23 06:35:47.952259654 +0000 UTC m=+147.756026307"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.971959 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n5jg8"]
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.973008 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.983904 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-utilities\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.984681 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-utilities\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.987299 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-catalog-content\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.987762 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-catalog-content\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:47 crc kubenswrapper[4937]: I0123 06:35:47.988324 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wkj\" (UniqueName: \"kubernetes.io/projected/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-kube-api-access-d4wkj\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.001544 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n5jg8"]
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.009893 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.069012 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wkj\" (UniqueName: \"kubernetes.io/projected/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-kube-api-access-d4wkj\") pod \"community-operators-qk5qp\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.090717 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-catalog-content\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.090777 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmxc\" (UniqueName: \"kubernetes.io/projected/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-kube-api-access-jrmxc\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.090820 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-utilities\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.128304 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.140743 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vmr8l"]
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.141728 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmr8l"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.149859 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmr8l"]
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.192877 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.193961 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-catalog-content\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.194002 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrmxc\" (UniqueName: \"kubernetes.io/projected/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-kube-api-access-jrmxc\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.194043 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-utilities\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.194084 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.194172 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.196376 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-catalog-content\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.201940 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-utilities\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.203270 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.213086 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrmxc\" (UniqueName: \"kubernetes.io/projected/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-kube-api-access-jrmxc\") pod \"certified-operators-n5jg8\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") " pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.256370 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.298867 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.298914 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-utilities\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.298939 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-catalog-content\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.298973 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.299038 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtqm\" (UniqueName: 
\"kubernetes.io/projected/177bc81c-0876-451a-8d35-76d17ae3259a-kube-api-access-nhtqm\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.309943 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.310647 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.311738 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.346500 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lsjwm"] Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.399600 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecae0951-0752-4c13-98ab-6fa8f4f86c33-config-volume\") pod \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.399727 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm77v\" 
(UniqueName: \"kubernetes.io/projected/ecae0951-0752-4c13-98ab-6fa8f4f86c33-kube-api-access-jm77v\") pod \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.399776 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecae0951-0752-4c13-98ab-6fa8f4f86c33-secret-volume\") pod \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\" (UID: \"ecae0951-0752-4c13-98ab-6fa8f4f86c33\") " Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.399990 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-utilities\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.400009 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-catalog-content\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.400069 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtqm\" (UniqueName: \"kubernetes.io/projected/177bc81c-0876-451a-8d35-76d17ae3259a-kube-api-access-nhtqm\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.400989 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecae0951-0752-4c13-98ab-6fa8f4f86c33-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"ecae0951-0752-4c13-98ab-6fa8f4f86c33" (UID: "ecae0951-0752-4c13-98ab-6fa8f4f86c33"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.401788 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-utilities\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.401900 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-catalog-content\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.417102 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecae0951-0752-4c13-98ab-6fa8f4f86c33-kube-api-access-jm77v" (OuterVolumeSpecName: "kube-api-access-jm77v") pod "ecae0951-0752-4c13-98ab-6fa8f4f86c33" (UID: "ecae0951-0752-4c13-98ab-6fa8f4f86c33"). InnerVolumeSpecName "kube-api-access-jm77v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.419618 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecae0951-0752-4c13-98ab-6fa8f4f86c33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecae0951-0752-4c13-98ab-6fa8f4f86c33" (UID: "ecae0951-0752-4c13-98ab-6fa8f4f86c33"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.431921 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtqm\" (UniqueName: \"kubernetes.io/projected/177bc81c-0876-451a-8d35-76d17ae3259a-kube-api-access-nhtqm\") pod \"community-operators-vmr8l\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") " pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.478955 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.483979 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5jg8" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.485239 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.486500 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-742j8" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.501814 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecae0951-0752-4c13-98ab-6fa8f4f86c33-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.501910 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecae0951-0752-4c13-98ab-6fa8f4f86c33-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.501921 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm77v\" (UniqueName: \"kubernetes.io/projected/ecae0951-0752-4c13-98ab-6fa8f4f86c33-kube-api-access-jm77v\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.537449 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.573750 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.585255 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 06:35:48 crc kubenswrapper[4937]: W0123 06:35:48.611477 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-46769eada3d9f263d17aa066f6013f88eeed43e9592d19fa98c2266f2f6db3dd WatchSource:0}: Error finding container 46769eada3d9f263d17aa066f6013f88eeed43e9592d19fa98c2266f2f6db3dd: Status 404 returned error can't find the container with id 46769eada3d9f263d17aa066f6013f88eeed43e9592d19fa98c2266f2f6db3dd Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.638338 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qk5qp"] Jan 23 06:35:48 crc kubenswrapper[4937]: W0123 06:35:48.654629 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c5c12d_911e_4ba4_b4ac_2121b26efdcb.slice/crio-a51ed2988d63eda175b46a8f81a4ff9046b8a26e50f3f85a3e449c76509dd6ab WatchSource:0}: Error finding container a51ed2988d63eda175b46a8f81a4ff9046b8a26e50f3f85a3e449c76509dd6ab: Status 404 returned error can't find the container with id a51ed2988d63eda175b46a8f81a4ff9046b8a26e50f3f85a3e449c76509dd6ab Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.909133 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:48 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:48 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:48 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.909565 4937 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.934752 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" event={"ID":"aaaebcc7-b79a-4068-be05-1e4c1808e6b4","Type":"ContainerStarted","Data":"035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e"} Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.934789 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" event={"ID":"aaaebcc7-b79a-4068-be05-1e4c1808e6b4","Type":"ContainerStarted","Data":"7d3b4a99b92ec9ac8e011e5f11115e85ff0b395578e0b2a3fdb2b2938bd444ae"} Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.934827 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.955375 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"46769eada3d9f263d17aa066f6013f88eeed43e9592d19fa98c2266f2f6db3dd"} Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.960413 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" podStartSLOduration=127.960391553 podStartE2EDuration="2m7.960391553s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:48.955641387 +0000 UTC m=+148.759408040" watchObservedRunningTime="2026-01-23 06:35:48.960391553 +0000 UTC m=+148.764158206" Jan 23 
06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.978405 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" event={"ID":"ecae0951-0752-4c13-98ab-6fa8f4f86c33","Type":"ContainerDied","Data":"e5c2f73f84f25619dc53d78b85e155633f2280abd574eaa29b22af02b7fa2e55"} Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.978456 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c2f73f84f25619dc53d78b85e155633f2280abd574eaa29b22af02b7fa2e55" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.978519 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v" Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.985367 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5qp" event={"ID":"89c5c12d-911e-4ba4-b4ac-2121b26efdcb","Type":"ContainerStarted","Data":"a51ed2988d63eda175b46a8f81a4ff9046b8a26e50f3f85a3e449c76509dd6ab"} Jan 23 06:35:48 crc kubenswrapper[4937]: I0123 06:35:48.988770 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.107899 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.107960 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.119292 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.156959 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-n5jg8"] Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.180199 4937 patch_prober.go:28] interesting pod/downloads-7954f5f757-6g59k container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.180677 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6g59k" podUID="57c396e4-f445-46c6-b636-cfe96d07cc43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.180477 4937 patch_prober.go:28] interesting pod/downloads-7954f5f757-6g59k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.180914 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6g59k" podUID="57c396e4-f445-46c6-b636-cfe96d07cc43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 23 06:35:49 crc kubenswrapper[4937]: W0123 06:35:49.181114 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82b0e24e_d09c_45a4_95f3_d7eb89a8dfe3.slice/crio-ef609b936c8bf00e7e7ee7f5e03cfd7e0410cc6e567ac1f3237426be5b945eb0 WatchSource:0}: Error finding container ef609b936c8bf00e7e7ee7f5e03cfd7e0410cc6e567ac1f3237426be5b945eb0: Status 404 returned error can't find the container with id ef609b936c8bf00e7e7ee7f5e03cfd7e0410cc6e567ac1f3237426be5b945eb0 Jan 23 06:35:49 
crc kubenswrapper[4937]: I0123 06:35:49.218733 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.220046 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.233889 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.242615 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.242661 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.244980 4937 patch_prober.go:28] interesting pod/console-f9d7485db-6n269 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.245032 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6n269" podUID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.281076 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-742j8"] Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.305071 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vmr8l"] Jan 23 06:35:49 crc 
kubenswrapper[4937]: I0123 06:35:49.747877 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 06:35:49 crc kubenswrapper[4937]: E0123 06:35:49.748174 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecae0951-0752-4c13-98ab-6fa8f4f86c33" containerName="collect-profiles" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.748187 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecae0951-0752-4c13-98ab-6fa8f4f86c33" containerName="collect-profiles" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.748339 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecae0951-0752-4c13-98ab-6fa8f4f86c33" containerName="collect-profiles" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.748834 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.751563 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.751568 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.752423 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hs5k4"] Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.761775 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.772034 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs5k4"] Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.772358 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.775307 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.867389 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e415690c-f592-4007-9bc2-e70f789e688a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.867494 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjxn\" (UniqueName: \"kubernetes.io/projected/ee63b0c9-aed6-4be4-9987-c27fe197911d-kube-api-access-bhjxn\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.867546 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-utilities\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.867566 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e415690c-f592-4007-9bc2-e70f789e688a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.867737 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-catalog-content\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.898896 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.902264 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:49 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:49 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:49 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.902331 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.969765 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhjxn\" (UniqueName: 
\"kubernetes.io/projected/ee63b0c9-aed6-4be4-9987-c27fe197911d-kube-api-access-bhjxn\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.969859 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-utilities\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.969894 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e415690c-f592-4007-9bc2-e70f789e688a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.969932 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-catalog-content\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.969992 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e415690c-f592-4007-9bc2-e70f789e688a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.970104 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e415690c-f592-4007-9bc2-e70f789e688a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.970549 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-utilities\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.970730 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-catalog-content\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:49 crc kubenswrapper[4937]: I0123 06:35:49.990684 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e415690c-f592-4007-9bc2-e70f789e688a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.003954 4937 generic.go:334] "Generic (PLEG): container finished" podID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerID="e0cebbed00ea87158a75cbf3bd36e8779a983dd303b4bf837cbf950c86202db4" exitCode=0 Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.004034 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5qp" event={"ID":"89c5c12d-911e-4ba4-b4ac-2121b26efdcb","Type":"ContainerDied","Data":"e0cebbed00ea87158a75cbf3bd36e8779a983dd303b4bf837cbf950c86202db4"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 
06:35:50.006080 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9c65733f774f67b331fb0f38d773415de238f6ee14a4efe31e21c7d3531e0f4b"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.006112 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c788dc864a3c001b2fef5aeea2ad8557b5a5c03448449c0e51914dac8a90c0f1"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.006541 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.008169 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"81cfc03f1c8043dafe0d70ba5a5af2a755295ac31518c5406f6411221203824e"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.008195 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"45f9049a309d73ebec60d7f0654bfac8848d3a437946d418a09073bdceae7be0"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.016081 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhjxn\" (UniqueName: \"kubernetes.io/projected/ee63b0c9-aed6-4be4-9987-c27fe197911d-kube-api-access-bhjxn\") pod \"redhat-marketplace-hs5k4\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.034853 4937 generic.go:334] "Generic 
(PLEG): container finished" podID="177bc81c-0876-451a-8d35-76d17ae3259a" containerID="272a6e43065fdcac90c7c67b8c07a439e8b97b1e6e16f8daad48196be9446a0e" exitCode=0 Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.034958 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr8l" event={"ID":"177bc81c-0876-451a-8d35-76d17ae3259a","Type":"ContainerDied","Data":"272a6e43065fdcac90c7c67b8c07a439e8b97b1e6e16f8daad48196be9446a0e"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.035492 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr8l" event={"ID":"177bc81c-0876-451a-8d35-76d17ae3259a","Type":"ContainerStarted","Data":"9778491a4f0ef56dda6fa766ea472b7220dfd0ad0dbc51df73a5d5a4783f4e53"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.039131 4937 generic.go:334] "Generic (PLEG): container finished" podID="a5003ecc-816a-494a-b773-0edb552a55f3" containerID="eb1cfd461d8a338437731c591913702089603ff2558903a595982a8fef94bd6b" exitCode=0 Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.039207 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerDied","Data":"eb1cfd461d8a338437731c591913702089603ff2558903a595982a8fef94bd6b"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.039258 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerStarted","Data":"27b7f807573b67303206c4899f492ab78ad73413720369ca3200fe4ac1d41760"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.045505 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e4c7cfd692df693c992c9e93c56fb89043a94da5a61629764b57f53114e1da37"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.048336 4937 generic.go:334] "Generic (PLEG): container finished" podID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerID="d77f9c4766c01712fadbd8656d08a7eb571906da4075cc3fa57403f74908c0bd" exitCode=0 Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.049694 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5jg8" event={"ID":"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3","Type":"ContainerDied","Data":"d77f9c4766c01712fadbd8656d08a7eb571906da4075cc3fa57403f74908c0bd"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.049740 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5jg8" event={"ID":"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3","Type":"ContainerStarted","Data":"ef609b936c8bf00e7e7ee7f5e03cfd7e0410cc6e567ac1f3237426be5b945eb0"} Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.060682 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-g8lnx" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.066714 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-c9fhv" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.195920 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.198656 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.209496 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rg2gr"] Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.211315 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.218775 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg2gr"] Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.301196 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6cp\" (UniqueName: \"kubernetes.io/projected/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-kube-api-access-8h6cp\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.301766 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-utilities\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.301815 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-catalog-content\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.403133 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-utilities\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.403207 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-catalog-content\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.403263 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h6cp\" (UniqueName: \"kubernetes.io/projected/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-kube-api-access-8h6cp\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.404094 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-utilities\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.404312 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-catalog-content\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.442542 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h6cp\" (UniqueName: 
\"kubernetes.io/projected/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-kube-api-access-8h6cp\") pod \"redhat-marketplace-rg2gr\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") " pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.607047 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg2gr" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.752641 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9rt4"] Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.753931 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.759277 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.790572 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9rt4"] Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.793077 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.909121 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:50 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:50 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:50 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.909180 4937 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.936413 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-utilities\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.936502 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfdd\" (UniqueName: \"kubernetes.io/projected/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-kube-api-access-zgfdd\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.936537 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-catalog-content\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:50 crc kubenswrapper[4937]: I0123 06:35:50.951816 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs5k4"] Jan 23 06:35:50 crc kubenswrapper[4937]: W0123 06:35:50.971868 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee63b0c9_aed6_4be4_9987_c27fe197911d.slice/crio-6d990283e0ab5fc53f62754d3c38307a2b0fee00abdd4175045d62beafb6f58d WatchSource:0}: Error finding container 
6d990283e0ab5fc53f62754d3c38307a2b0fee00abdd4175045d62beafb6f58d: Status 404 returned error can't find the container with id 6d990283e0ab5fc53f62754d3c38307a2b0fee00abdd4175045d62beafb6f58d Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.011050 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg2gr"] Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.037533 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-catalog-content\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.037641 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-utilities\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.037701 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgfdd\" (UniqueName: \"kubernetes.io/projected/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-kube-api-access-zgfdd\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.038523 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-catalog-content\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.038799 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-utilities\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.075719 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfdd\" (UniqueName: \"kubernetes.io/projected/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-kube-api-access-zgfdd\") pod \"redhat-operators-t9rt4\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.093843 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs5k4" event={"ID":"ee63b0c9-aed6-4be4-9987-c27fe197911d","Type":"ContainerStarted","Data":"6d990283e0ab5fc53f62754d3c38307a2b0fee00abdd4175045d62beafb6f58d"} Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.095319 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.100932 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg2gr" event={"ID":"ea9ca5e0-3d00-45b5-b52d-74acee0081c9","Type":"ContainerStarted","Data":"e715b5d7d3939d9835a72bdddfc7358eddbae8447d647f5f69008bd27a656b04"} Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.110885 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e415690c-f592-4007-9bc2-e70f789e688a","Type":"ContainerStarted","Data":"8378344ed0f6da86e7bcd346a165dfb5a1051f9ea0570697fdc54b60f7b42067"} Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.140616 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ppjrq"] Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.142028 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.163827 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppjrq"] Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.343069 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8cj\" (UniqueName: \"kubernetes.io/projected/df8602f3-84a0-42cf-99ac-fe547e004f25-kube-api-access-9n8cj\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.343168 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-utilities\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.343256 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-catalog-content\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.444153 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-utilities\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.444850 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-catalog-content\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.444907 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8cj\" (UniqueName: \"kubernetes.io/projected/df8602f3-84a0-42cf-99ac-fe547e004f25-kube-api-access-9n8cj\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.445354 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-utilities\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.445391 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-catalog-content\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.467362 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8cj\" (UniqueName: \"kubernetes.io/projected/df8602f3-84a0-42cf-99ac-fe547e004f25-kube-api-access-9n8cj\") pod \"redhat-operators-ppjrq\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") " pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.505009 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9rt4"] Jan 23 06:35:51 crc 
kubenswrapper[4937]: W0123 06:35:51.541509 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8f13571_ed06_4bcc_825f_8bc5915ab5a7.slice/crio-65a004f28cabe16540c2fb034fcf86d6e153c3924f4d85040b69721f93a76c9c WatchSource:0}: Error finding container 65a004f28cabe16540c2fb034fcf86d6e153c3924f4d85040b69721f93a76c9c: Status 404 returned error can't find the container with id 65a004f28cabe16540c2fb034fcf86d6e153c3924f4d85040b69721f93a76c9c Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.574734 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppjrq" Jan 23 06:35:51 crc kubenswrapper[4937]: E0123 06:35:51.867476 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8f13571_ed06_4bcc_825f_8bc5915ab5a7.slice/crio-conmon-93bad35010a127035bd8ca55f4e195fc212ffbcf499e44b2cf6fac2640178508.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.904356 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:51 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:51 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:51 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:51 crc kubenswrapper[4937]: I0123 06:35:51.904420 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:52 crc kubenswrapper[4937]: 
I0123 06:35:52.029006 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppjrq"] Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.146205 4937 generic.go:334] "Generic (PLEG): container finished" podID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerID="bb0d45891f596d84f61ea2184c440c76a97a2fbde591a25475495be9b0c41a0c" exitCode=0 Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.146455 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg2gr" event={"ID":"ea9ca5e0-3d00-45b5-b52d-74acee0081c9","Type":"ContainerDied","Data":"bb0d45891f596d84f61ea2184c440c76a97a2fbde591a25475495be9b0c41a0c"} Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.164814 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e415690c-f592-4007-9bc2-e70f789e688a","Type":"ContainerStarted","Data":"9819838d93aca8eee426f38ac0c594b13d2146f7ed9e7a5fa2b3a9a191fd11af"} Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.185852 4937 generic.go:334] "Generic (PLEG): container finished" podID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerID="93bad35010a127035bd8ca55f4e195fc212ffbcf499e44b2cf6fac2640178508" exitCode=0 Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.185930 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerDied","Data":"93bad35010a127035bd8ca55f4e195fc212ffbcf499e44b2cf6fac2640178508"} Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.185963 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerStarted","Data":"65a004f28cabe16540c2fb034fcf86d6e153c3924f4d85040b69721f93a76c9c"} Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.193139 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerStarted","Data":"5254fba96f7944ddc8c1582addb8d82ad4093ff5c4dedb8fa655d721531494dc"} Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.194460 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.19444319 podStartE2EDuration="3.19444319s" podCreationTimestamp="2026-01-23 06:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:35:52.192054131 +0000 UTC m=+151.995820784" watchObservedRunningTime="2026-01-23 06:35:52.19444319 +0000 UTC m=+151.998209833" Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.195504 4937 generic.go:334] "Generic (PLEG): container finished" podID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerID="59af0abcc6f069d4bb88b5c7318223165323934f204f5aadbd38e4004b675f4f" exitCode=0 Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.196233 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs5k4" event={"ID":"ee63b0c9-aed6-4be4-9987-c27fe197911d","Type":"ContainerDied","Data":"59af0abcc6f069d4bb88b5c7318223165323934f204f5aadbd38e4004b675f4f"} Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.900735 4937 patch_prober.go:28] interesting pod/router-default-5444994796-62pqr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 06:35:52 crc kubenswrapper[4937]: [-]has-synced failed: reason withheld Jan 23 06:35:52 crc kubenswrapper[4937]: [+]process-running ok Jan 23 06:35:52 crc kubenswrapper[4937]: healthz check failed Jan 23 06:35:52 crc kubenswrapper[4937]: I0123 06:35:52.900798 4937 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-62pqr" podUID="4059e769-c52c-44c3-88c0-35ab870cedf8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 06:35:53 crc kubenswrapper[4937]: I0123 06:35:53.228945 4937 generic.go:334] "Generic (PLEG): container finished" podID="e415690c-f592-4007-9bc2-e70f789e688a" containerID="9819838d93aca8eee426f38ac0c594b13d2146f7ed9e7a5fa2b3a9a191fd11af" exitCode=0 Jan 23 06:35:53 crc kubenswrapper[4937]: I0123 06:35:53.229087 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e415690c-f592-4007-9bc2-e70f789e688a","Type":"ContainerDied","Data":"9819838d93aca8eee426f38ac0c594b13d2146f7ed9e7a5fa2b3a9a191fd11af"} Jan 23 06:35:53 crc kubenswrapper[4937]: I0123 06:35:53.240008 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerDied","Data":"85c1bfc21413992556b3c6739acd237837245ce230acfcf3e5df934a19cfd26e"} Jan 23 06:35:53 crc kubenswrapper[4937]: I0123 06:35:53.239815 4937 generic.go:334] "Generic (PLEG): container finished" podID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerID="85c1bfc21413992556b3c6739acd237837245ce230acfcf3e5df934a19cfd26e" exitCode=0 Jan 23 06:35:53 crc kubenswrapper[4937]: I0123 06:35:53.902807 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:53 crc kubenswrapper[4937]: I0123 06:35:53.908069 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-62pqr" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.688824 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.724455 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 06:35:54 crc kubenswrapper[4937]: E0123 06:35:54.724855 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e415690c-f592-4007-9bc2-e70f789e688a" containerName="pruner" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.724877 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e415690c-f592-4007-9bc2-e70f789e688a" containerName="pruner" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.725024 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e415690c-f592-4007-9bc2-e70f789e688a" containerName="pruner" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.725534 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.729802 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.731243 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.741042 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.747513 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e415690c-f592-4007-9bc2-e70f789e688a-kube-api-access\") pod \"e415690c-f592-4007-9bc2-e70f789e688a\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.747584 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e415690c-f592-4007-9bc2-e70f789e688a-kubelet-dir\") pod \"e415690c-f592-4007-9bc2-e70f789e688a\" (UID: \"e415690c-f592-4007-9bc2-e70f789e688a\") " Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.747739 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e415690c-f592-4007-9bc2-e70f789e688a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e415690c-f592-4007-9bc2-e70f789e688a" (UID: "e415690c-f592-4007-9bc2-e70f789e688a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.747916 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/736862bd-f37f-4b25-aeee-ed6622343ae8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.747965 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/736862bd-f37f-4b25-aeee-ed6622343ae8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.748041 4937 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e415690c-f592-4007-9bc2-e70f789e688a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.756419 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e415690c-f592-4007-9bc2-e70f789e688a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e415690c-f592-4007-9bc2-e70f789e688a" (UID: "e415690c-f592-4007-9bc2-e70f789e688a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.850364 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/736862bd-f37f-4b25-aeee-ed6622343ae8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.850799 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/736862bd-f37f-4b25-aeee-ed6622343ae8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.851027 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e415690c-f592-4007-9bc2-e70f789e688a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.851155 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/736862bd-f37f-4b25-aeee-ed6622343ae8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.869412 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/736862bd-f37f-4b25-aeee-ed6622343ae8-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:54 crc kubenswrapper[4937]: I0123 06:35:54.977277 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:35:55 crc kubenswrapper[4937]: I0123 06:35:55.049694 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:35:55 crc kubenswrapper[4937]: I0123 06:35:55.120878 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vvxbh" Jan 23 06:35:55 crc kubenswrapper[4937]: I0123 06:35:55.307877 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e415690c-f592-4007-9bc2-e70f789e688a","Type":"ContainerDied","Data":"8378344ed0f6da86e7bcd346a165dfb5a1051f9ea0570697fdc54b60f7b42067"} Jan 23 06:35:55 crc kubenswrapper[4937]: I0123 06:35:55.309231 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8378344ed0f6da86e7bcd346a165dfb5a1051f9ea0570697fdc54b60f7b42067" Jan 23 06:35:55 crc kubenswrapper[4937]: I0123 06:35:55.308839 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 06:35:55 crc kubenswrapper[4937]: I0123 06:35:55.433627 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 06:35:55 crc kubenswrapper[4937]: W0123 06:35:55.451009 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod736862bd_f37f_4b25_aeee_ed6622343ae8.slice/crio-bacd3858d1d01f20b81644e488c9ab84e241554b5fad20db708ded945b5a09c1 WatchSource:0}: Error finding container bacd3858d1d01f20b81644e488c9ab84e241554b5fad20db708ded945b5a09c1: Status 404 returned error can't find the container with id bacd3858d1d01f20b81644e488c9ab84e241554b5fad20db708ded945b5a09c1 Jan 23 06:35:56 crc kubenswrapper[4937]: I0123 06:35:56.333922 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"736862bd-f37f-4b25-aeee-ed6622343ae8","Type":"ContainerStarted","Data":"bacd3858d1d01f20b81644e488c9ab84e241554b5fad20db708ded945b5a09c1"} Jan 23 06:35:57 crc kubenswrapper[4937]: I0123 06:35:57.369523 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"736862bd-f37f-4b25-aeee-ed6622343ae8","Type":"ContainerStarted","Data":"8cd06ecc36bb6d8b6dc42122cf900acb419bb7dc58b221a7358ff5d4b3190cff"} Jan 23 06:35:58 crc kubenswrapper[4937]: I0123 06:35:58.385547 4937 generic.go:334] "Generic (PLEG): container finished" podID="736862bd-f37f-4b25-aeee-ed6622343ae8" containerID="8cd06ecc36bb6d8b6dc42122cf900acb419bb7dc58b221a7358ff5d4b3190cff" exitCode=0 Jan 23 06:35:58 crc kubenswrapper[4937]: I0123 06:35:58.385620 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"736862bd-f37f-4b25-aeee-ed6622343ae8","Type":"ContainerDied","Data":"8cd06ecc36bb6d8b6dc42122cf900acb419bb7dc58b221a7358ff5d4b3190cff"} Jan 23 
06:35:59 crc kubenswrapper[4937]: I0123 06:35:59.185465 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6g59k" Jan 23 06:35:59 crc kubenswrapper[4937]: I0123 06:35:59.251047 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:35:59 crc kubenswrapper[4937]: I0123 06:35:59.255146 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:36:04 crc kubenswrapper[4937]: I0123 06:36:04.677520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:36:04 crc kubenswrapper[4937]: I0123 06:36:04.688538 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f394d17-1f72-43ba-8d51-b76e56dd6849-metrics-certs\") pod \"network-metrics-daemon-7ksbw\" (UID: \"5f394d17-1f72-43ba-8d51-b76e56dd6849\") " pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:36:04 crc kubenswrapper[4937]: I0123 06:36:04.866221 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7ksbw" Jan 23 06:36:07 crc kubenswrapper[4937]: I0123 06:36:07.724312 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:36:07 crc kubenswrapper[4937]: I0123 06:36:07.724388 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.017978 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.370415 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.442745 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/736862bd-f37f-4b25-aeee-ed6622343ae8-kube-api-access\") pod \"736862bd-f37f-4b25-aeee-ed6622343ae8\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.443328 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/736862bd-f37f-4b25-aeee-ed6622343ae8-kubelet-dir\") pod \"736862bd-f37f-4b25-aeee-ed6622343ae8\" (UID: \"736862bd-f37f-4b25-aeee-ed6622343ae8\") " Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.443472 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/736862bd-f37f-4b25-aeee-ed6622343ae8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "736862bd-f37f-4b25-aeee-ed6622343ae8" (UID: "736862bd-f37f-4b25-aeee-ed6622343ae8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.443971 4937 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/736862bd-f37f-4b25-aeee-ed6622343ae8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.454866 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736862bd-f37f-4b25-aeee-ed6622343ae8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "736862bd-f37f-4b25-aeee-ed6622343ae8" (UID: "736862bd-f37f-4b25-aeee-ed6622343ae8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.469038 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"736862bd-f37f-4b25-aeee-ed6622343ae8","Type":"ContainerDied","Data":"bacd3858d1d01f20b81644e488c9ab84e241554b5fad20db708ded945b5a09c1"} Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.469098 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bacd3858d1d01f20b81644e488c9ab84e241554b5fad20db708ded945b5a09c1" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.469188 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 06:36:08 crc kubenswrapper[4937]: I0123 06:36:08.545439 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/736862bd-f37f-4b25-aeee-ed6622343ae8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:19 crc kubenswrapper[4937]: I0123 06:36:19.375453 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mvhd8" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.649642 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.650416 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhjxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hs5k4_openshift-marketplace(ee63b0c9-aed6-4be4-9987-c27fe197911d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.651792 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hs5k4" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" Jan 23 06:36:25 crc 
kubenswrapper[4937]: E0123 06:36:25.730179 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.730436 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8h6cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-rg2gr_openshift-marketplace(ea9ca5e0-3d00-45b5-b52d-74acee0081c9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.731660 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rg2gr" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.744050 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.744221 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhtqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vmr8l_openshift-marketplace(177bc81c-0876-451a-8d35-76d17ae3259a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:25 crc kubenswrapper[4937]: E0123 06:36:25.745405 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vmr8l" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" Jan 23 06:36:28 crc 
kubenswrapper[4937]: I0123 06:36:28.578814 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.148411 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vmr8l" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.148411 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rg2gr" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.148903 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hs5k4" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.239793 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.239793 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 
06:36:29.239959 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgfdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t9rt4_openshift-marketplace(f8f13571-ed06-4bcc-825f-8bc5915ab5a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.240011 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n8cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ppjrq_openshift-marketplace(df8602f3-84a0-42cf-99ac-fe547e004f25): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.241182 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t9rt4" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" Jan 23 06:36:29 crc kubenswrapper[4937]: E0123 06:36:29.241773 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ppjrq" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.115337 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.115977 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736862bd-f37f-4b25-aeee-ed6622343ae8" containerName="pruner" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.115991 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="736862bd-f37f-4b25-aeee-ed6622343ae8" containerName="pruner" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.116090 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="736862bd-f37f-4b25-aeee-ed6622343ae8" containerName="pruner" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.116497 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.121379 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.121701 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.144467 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.168549 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d17871a-d385-4406-922f-40281c667b38-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.169196 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d17871a-d385-4406-922f-40281c667b38-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.270501 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d17871a-d385-4406-922f-40281c667b38-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.271051 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2d17871a-d385-4406-922f-40281c667b38-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.270645 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d17871a-d385-4406-922f-40281c667b38-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.294362 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d17871a-d385-4406-922f-40281c667b38-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: I0123 06:36:30.488282 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.723644 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t9rt4" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.724357 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ppjrq" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.841023 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.841181 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrmxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-n5jg8_openshift-marketplace(82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.842425 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-n5jg8" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" Jan 23 06:36:30 crc 
kubenswrapper[4937]: E0123 06:36:30.849704 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.849958 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhj2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-742j8_openshift-marketplace(a5003ecc-816a-494a-b773-0edb552a55f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 06:36:30 crc kubenswrapper[4937]: E0123 06:36:30.852699 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-742j8" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" Jan 23 06:36:31 crc kubenswrapper[4937]: I0123 06:36:31.186792 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7ksbw"] Jan 23 06:36:31 crc kubenswrapper[4937]: I0123 06:36:31.192059 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 06:36:31 crc kubenswrapper[4937]: I0123 06:36:31.669110 4937 generic.go:334] "Generic (PLEG): container finished" podID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerID="bbb98caedd35369a4d01e929e5bd7647e08ac3de6771de5813a505162ac0cf2a" exitCode=0 Jan 23 06:36:31 crc kubenswrapper[4937]: I0123 06:36:31.669686 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5qp" event={"ID":"89c5c12d-911e-4ba4-b4ac-2121b26efdcb","Type":"ContainerDied","Data":"bbb98caedd35369a4d01e929e5bd7647e08ac3de6771de5813a505162ac0cf2a"} Jan 23 06:36:31 crc kubenswrapper[4937]: I0123 06:36:31.675764 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" event={"ID":"5f394d17-1f72-43ba-8d51-b76e56dd6849","Type":"ContainerStarted","Data":"00caf2690b81ed99a9679426b919311e5833cf7865e770404e1797f89683ff6e"} Jan 23 06:36:31 crc kubenswrapper[4937]: I0123 06:36:31.680112 4937 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2d17871a-d385-4406-922f-40281c667b38","Type":"ContainerStarted","Data":"7d1aaeb5e9217eae743e1b2aed43424552a9c06ecff2cdc7fafef049504ec0cb"} Jan 23 06:36:31 crc kubenswrapper[4937]: E0123 06:36:31.681565 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-n5jg8" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" Jan 23 06:36:31 crc kubenswrapper[4937]: E0123 06:36:31.683353 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-742j8" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" Jan 23 06:36:32 crc kubenswrapper[4937]: I0123 06:36:32.688075 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" event={"ID":"5f394d17-1f72-43ba-8d51-b76e56dd6849","Type":"ContainerStarted","Data":"7c53291fe48ae4facd58e56f8ac5cb2871c766def5805ebb1ff331297fd2a382"} Jan 23 06:36:32 crc kubenswrapper[4937]: I0123 06:36:32.688570 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7ksbw" event={"ID":"5f394d17-1f72-43ba-8d51-b76e56dd6849","Type":"ContainerStarted","Data":"cc06c9b2f8afb83894825654f79537bc3aa8316180b9e851e9ec0d19e3f2a397"} Jan 23 06:36:32 crc kubenswrapper[4937]: I0123 06:36:32.692976 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5qp" event={"ID":"89c5c12d-911e-4ba4-b4ac-2121b26efdcb","Type":"ContainerStarted","Data":"8f97a13b89840ea0d9fdf1f6abb45a940039aeb401ccee052994c0b514bbc858"} Jan 23 06:36:32 crc 
kubenswrapper[4937]: I0123 06:36:32.698392 4937 generic.go:334] "Generic (PLEG): container finished" podID="2d17871a-d385-4406-922f-40281c667b38" containerID="3f9048b05209db16f7a9ee627bcc7f1708c6a023ab139d11118877c41498aa70" exitCode=0 Jan 23 06:36:32 crc kubenswrapper[4937]: I0123 06:36:32.698458 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2d17871a-d385-4406-922f-40281c667b38","Type":"ContainerDied","Data":"3f9048b05209db16f7a9ee627bcc7f1708c6a023ab139d11118877c41498aa70"} Jan 23 06:36:32 crc kubenswrapper[4937]: I0123 06:36:32.706286 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7ksbw" podStartSLOduration=171.706262248 podStartE2EDuration="2m51.706262248s" podCreationTimestamp="2026-01-23 06:33:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:36:32.705507256 +0000 UTC m=+192.509273909" watchObservedRunningTime="2026-01-23 06:36:32.706262248 +0000 UTC m=+192.510028921" Jan 23 06:36:32 crc kubenswrapper[4937]: I0123 06:36:32.723431 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qk5qp" podStartSLOduration=2.563209442 podStartE2EDuration="45.723351168s" podCreationTimestamp="2026-01-23 06:35:47 +0000 UTC" firstStartedPulling="2026-01-23 06:35:48.988371456 +0000 UTC m=+148.792138109" lastFinishedPulling="2026-01-23 06:36:32.148513182 +0000 UTC m=+191.952279835" observedRunningTime="2026-01-23 06:36:32.721254728 +0000 UTC m=+192.525021411" watchObservedRunningTime="2026-01-23 06:36:32.723351168 +0000 UTC m=+192.527117821" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.071380 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.142921 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d17871a-d385-4406-922f-40281c667b38-kubelet-dir\") pod \"2d17871a-d385-4406-922f-40281c667b38\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.143035 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d17871a-d385-4406-922f-40281c667b38-kube-api-access\") pod \"2d17871a-d385-4406-922f-40281c667b38\" (UID: \"2d17871a-d385-4406-922f-40281c667b38\") " Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.143090 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d17871a-d385-4406-922f-40281c667b38-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d17871a-d385-4406-922f-40281c667b38" (UID: "2d17871a-d385-4406-922f-40281c667b38"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.143386 4937 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d17871a-d385-4406-922f-40281c667b38-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.156293 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d17871a-d385-4406-922f-40281c667b38-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d17871a-d385-4406-922f-40281c667b38" (UID: "2d17871a-d385-4406-922f-40281c667b38"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.245118 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d17871a-d385-4406-922f-40281c667b38-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.715127 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2d17871a-d385-4406-922f-40281c667b38","Type":"ContainerDied","Data":"7d1aaeb5e9217eae743e1b2aed43424552a9c06ecff2cdc7fafef049504ec0cb"} Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.715179 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 06:36:34 crc kubenswrapper[4937]: I0123 06:36:34.715181 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d1aaeb5e9217eae743e1b2aed43424552a9c06ecff2cdc7fafef049504ec0cb" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.108890 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 06:36:37 crc kubenswrapper[4937]: E0123 06:36:37.118769 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d17871a-d385-4406-922f-40281c667b38" containerName="pruner" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.118811 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d17871a-d385-4406-922f-40281c667b38" containerName="pruner" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.118991 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d17871a-d385-4406-922f-40281c667b38" containerName="pruner" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.119644 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.126171 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.126558 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.133405 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.186933 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-var-lock\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.187156 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.187213 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ed93f4b-59c0-4118-88b4-285140369236-kube-api-access\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.289663 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.289838 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.289892 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ed93f4b-59c0-4118-88b4-285140369236-kube-api-access\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.290797 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-var-lock\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.290876 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-var-lock\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.312201 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ed93f4b-59c0-4118-88b4-285140369236-kube-api-access\") pod \"installer-9-crc\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.451855 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.724904 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.725394 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:36:37 crc kubenswrapper[4937]: I0123 06:36:37.954345 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 06:36:37 crc kubenswrapper[4937]: W0123 06:36:37.960728 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5ed93f4b_59c0_4118_88b4_285140369236.slice/crio-fd66381b75448a8612d1d0e1d0ab40955d044b905916163c914cf41ab803906a WatchSource:0}: Error finding container fd66381b75448a8612d1d0e1d0ab40955d044b905916163c914cf41ab803906a: Status 404 returned error can't find the container with id fd66381b75448a8612d1d0e1d0ab40955d044b905916163c914cf41ab803906a Jan 23 06:36:38 crc kubenswrapper[4937]: I0123 06:36:38.129856 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qk5qp" Jan 23 06:36:38 crc kubenswrapper[4937]: I0123 06:36:38.129908 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-qk5qp" Jan 23 06:36:38 crc kubenswrapper[4937]: I0123 06:36:38.195188 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qk5qp" Jan 23 06:36:38 crc kubenswrapper[4937]: I0123 06:36:38.743196 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5ed93f4b-59c0-4118-88b4-285140369236","Type":"ContainerStarted","Data":"fd66381b75448a8612d1d0e1d0ab40955d044b905916163c914cf41ab803906a"} Jan 23 06:36:38 crc kubenswrapper[4937]: I0123 06:36:38.783653 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qk5qp" Jan 23 06:36:39 crc kubenswrapper[4937]: I0123 06:36:39.750898 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5ed93f4b-59c0-4118-88b4-285140369236","Type":"ContainerStarted","Data":"e9996af549fe7d4ba78c93d4b6a5eb9b7a3d160091b6f8c2160782f4b2bc872c"} Jan 23 06:36:39 crc kubenswrapper[4937]: I0123 06:36:39.774340 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.774309017 podStartE2EDuration="2.774309017s" podCreationTimestamp="2026-01-23 06:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:36:39.767581771 +0000 UTC m=+199.571348434" watchObservedRunningTime="2026-01-23 06:36:39.774309017 +0000 UTC m=+199.578075680" Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.774372 4937 generic.go:334] "Generic (PLEG): container finished" podID="177bc81c-0876-451a-8d35-76d17ae3259a" containerID="ac95cc929132e9a3f7ece90a4f72a5e700eeafbd7d2ba8e63f8b41720b9439e9" exitCode=0 Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.774468 4937 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-vmr8l" event={"ID":"177bc81c-0876-451a-8d35-76d17ae3259a","Type":"ContainerDied","Data":"ac95cc929132e9a3f7ece90a4f72a5e700eeafbd7d2ba8e63f8b41720b9439e9"} Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.778684 4937 generic.go:334] "Generic (PLEG): container finished" podID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerID="5565237843dcf1f84efb90ccd7d87e9dff37650c42d10e025080a1c45bd0dc8d" exitCode=0 Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.778793 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs5k4" event={"ID":"ee63b0c9-aed6-4be4-9987-c27fe197911d","Type":"ContainerDied","Data":"5565237843dcf1f84efb90ccd7d87e9dff37650c42d10e025080a1c45bd0dc8d"} Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.784128 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerStarted","Data":"dac6efff1426041abaa622a97b906e9a1550e112b2fd1eef52008b0e4162de0b"} Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.788108 4937 generic.go:334] "Generic (PLEG): container finished" podID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerID="5dfd2696ad5f690db6c8fee0a5fbf34c968b08ed8dcee92816f584e0542b6752" exitCode=0 Jan 23 06:36:43 crc kubenswrapper[4937]: I0123 06:36:43.788161 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg2gr" event={"ID":"ea9ca5e0-3d00-45b5-b52d-74acee0081c9","Type":"ContainerDied","Data":"5dfd2696ad5f690db6c8fee0a5fbf34c968b08ed8dcee92816f584e0542b6752"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.796380 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs5k4" 
event={"ID":"ee63b0c9-aed6-4be4-9987-c27fe197911d","Type":"ContainerStarted","Data":"f1c3ba9425ef2296b960ac2637cf0b6ca561bf0a2d916ad9d324c21f427ee785"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.800374 4937 generic.go:334] "Generic (PLEG): container finished" podID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerID="dac6efff1426041abaa622a97b906e9a1550e112b2fd1eef52008b0e4162de0b" exitCode=0 Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.800551 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerDied","Data":"dac6efff1426041abaa622a97b906e9a1550e112b2fd1eef52008b0e4162de0b"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.802621 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerStarted","Data":"2f1363c18cddb58037598afc012f89ca3e9df6fafd7c4f1bed5455ffbcf7497d"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.805103 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg2gr" event={"ID":"ea9ca5e0-3d00-45b5-b52d-74acee0081c9","Type":"ContainerStarted","Data":"968b9be7387cb8cbb8e1042436e2b5e47e5c3a093505abb8e66e3cbfc36bcd21"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.808014 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerStarted","Data":"80225fb15d29d93badbcd3ae88f2a1ffc2ca11d2eef59defe5301c713fec8674"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.826199 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr8l" 
event={"ID":"177bc81c-0876-451a-8d35-76d17ae3259a","Type":"ContainerStarted","Data":"e2f2b64769150876fd4f5cd8304302828445e2d1324a354ac72feac891be2345"} Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.838645 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hs5k4" podStartSLOduration=3.772973573 podStartE2EDuration="55.83860457s" podCreationTimestamp="2026-01-23 06:35:49 +0000 UTC" firstStartedPulling="2026-01-23 06:35:52.198451965 +0000 UTC m=+152.002218618" lastFinishedPulling="2026-01-23 06:36:44.264082962 +0000 UTC m=+204.067849615" observedRunningTime="2026-01-23 06:36:44.838301741 +0000 UTC m=+204.642068394" watchObservedRunningTime="2026-01-23 06:36:44.83860457 +0000 UTC m=+204.642371223" Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.864034 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vmr8l" podStartSLOduration=2.659559177 podStartE2EDuration="56.864006952s" podCreationTimestamp="2026-01-23 06:35:48 +0000 UTC" firstStartedPulling="2026-01-23 06:35:50.039072148 +0000 UTC m=+149.842838821" lastFinishedPulling="2026-01-23 06:36:44.243519943 +0000 UTC m=+204.047286596" observedRunningTime="2026-01-23 06:36:44.859984731 +0000 UTC m=+204.663751384" watchObservedRunningTime="2026-01-23 06:36:44.864006952 +0000 UTC m=+204.667773605" Jan 23 06:36:44 crc kubenswrapper[4937]: I0123 06:36:44.875748 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rg2gr" podStartSLOduration=2.739322801 podStartE2EDuration="54.875727067s" podCreationTimestamp="2026-01-23 06:35:50 +0000 UTC" firstStartedPulling="2026-01-23 06:35:52.159419994 +0000 UTC m=+151.963186647" lastFinishedPulling="2026-01-23 06:36:44.29582426 +0000 UTC m=+204.099590913" observedRunningTime="2026-01-23 06:36:44.874865042 +0000 UTC m=+204.678631705" watchObservedRunningTime="2026-01-23 
06:36:44.875727067 +0000 UTC m=+204.679493710" Jan 23 06:36:45 crc kubenswrapper[4937]: I0123 06:36:45.834975 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerStarted","Data":"50ed0759a9957203b768e817445403a65e3637255645d366f8eabde26a3e3c9c"} Jan 23 06:36:45 crc kubenswrapper[4937]: I0123 06:36:45.837476 4937 generic.go:334] "Generic (PLEG): container finished" podID="a5003ecc-816a-494a-b773-0edb552a55f3" containerID="2f1363c18cddb58037598afc012f89ca3e9df6fafd7c4f1bed5455ffbcf7497d" exitCode=0 Jan 23 06:36:45 crc kubenswrapper[4937]: I0123 06:36:45.837531 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerDied","Data":"2f1363c18cddb58037598afc012f89ca3e9df6fafd7c4f1bed5455ffbcf7497d"} Jan 23 06:36:45 crc kubenswrapper[4937]: I0123 06:36:45.839489 4937 generic.go:334] "Generic (PLEG): container finished" podID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerID="80225fb15d29d93badbcd3ae88f2a1ffc2ca11d2eef59defe5301c713fec8674" exitCode=0 Jan 23 06:36:45 crc kubenswrapper[4937]: I0123 06:36:45.839525 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerDied","Data":"80225fb15d29d93badbcd3ae88f2a1ffc2ca11d2eef59defe5301c713fec8674"} Jan 23 06:36:45 crc kubenswrapper[4937]: I0123 06:36:45.859960 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ppjrq" podStartSLOduration=2.839073855 podStartE2EDuration="54.859934325s" podCreationTimestamp="2026-01-23 06:35:51 +0000 UTC" firstStartedPulling="2026-01-23 06:35:53.243982638 +0000 UTC m=+153.047749291" lastFinishedPulling="2026-01-23 06:36:45.264843108 +0000 UTC m=+205.068609761" 
observedRunningTime="2026-01-23 06:36:45.858150525 +0000 UTC m=+205.661917188" watchObservedRunningTime="2026-01-23 06:36:45.859934325 +0000 UTC m=+205.663700978" Jan 23 06:36:46 crc kubenswrapper[4937]: I0123 06:36:46.850055 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerStarted","Data":"183bdc0370b31ecb970cbc49ba456485a41bfbbb0d32c9cb58b94a2f27fc118a"} Jan 23 06:36:46 crc kubenswrapper[4937]: I0123 06:36:46.858298 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerStarted","Data":"7b683fcd6ea0ef62d4427e07e323fdfabe3c21b592fb6dd8ea31543e4afeac44"} Jan 23 06:36:46 crc kubenswrapper[4937]: I0123 06:36:46.877046 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9rt4" podStartSLOduration=2.6562225550000003 podStartE2EDuration="56.877024062s" podCreationTimestamp="2026-01-23 06:35:50 +0000 UTC" firstStartedPulling="2026-01-23 06:35:52.189051445 +0000 UTC m=+151.992818098" lastFinishedPulling="2026-01-23 06:36:46.409852952 +0000 UTC m=+206.213619605" observedRunningTime="2026-01-23 06:36:46.870549742 +0000 UTC m=+206.674316405" watchObservedRunningTime="2026-01-23 06:36:46.877024062 +0000 UTC m=+206.680790715" Jan 23 06:36:48 crc kubenswrapper[4937]: I0123 06:36:48.485373 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:36:48 crc kubenswrapper[4937]: I0123 06:36:48.485926 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vmr8l" Jan 23 06:36:48 crc kubenswrapper[4937]: I0123 06:36:48.486948 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:36:48 crc kubenswrapper[4937]: I0123 06:36:48.487164 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:36:48 crc kubenswrapper[4937]: I0123 06:36:48.585528 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vmr8l"
Jan 23 06:36:48 crc kubenswrapper[4937]: I0123 06:36:48.612439 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-742j8" podStartSLOduration=5.282302515 podStartE2EDuration="1m1.612419074s" podCreationTimestamp="2026-01-23 06:35:47 +0000 UTC" firstStartedPulling="2026-01-23 06:35:50.040683054 +0000 UTC m=+149.844449697" lastFinishedPulling="2026-01-23 06:36:46.370799603 +0000 UTC m=+206.174566256" observedRunningTime="2026-01-23 06:36:46.89505515 +0000 UTC m=+206.698821803" watchObservedRunningTime="2026-01-23 06:36:48.612419074 +0000 UTC m=+208.416185727"
Jan 23 06:36:49 crc kubenswrapper[4937]: I0123 06:36:49.545458 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-742j8" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="registry-server" probeResult="failure" output=<
Jan 23 06:36:49 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s
Jan 23 06:36:49 crc kubenswrapper[4937]: >
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.196758 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hs5k4"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.196853 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hs5k4"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.250641 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hs5k4"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.611281 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rg2gr"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.611340 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rg2gr"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.664405 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rg2gr"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.924956 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hs5k4"
Jan 23 06:36:50 crc kubenswrapper[4937]: I0123 06:36:50.935460 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rg2gr"
Jan 23 06:36:51 crc kubenswrapper[4937]: I0123 06:36:51.096492 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9rt4"
Jan 23 06:36:51 crc kubenswrapper[4937]: I0123 06:36:51.096579 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9rt4"
Jan 23 06:36:51 crc kubenswrapper[4937]: I0123 06:36:51.575041 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ppjrq"
Jan 23 06:36:51 crc kubenswrapper[4937]: I0123 06:36:51.575341 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ppjrq"
Jan 23 06:36:52 crc kubenswrapper[4937]: I0123 06:36:52.149023 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9rt4" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="registry-server" probeResult="failure" output=<
Jan 23 06:36:52 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s
Jan 23 06:36:52 crc kubenswrapper[4937]: >
Jan 23 06:36:52 crc kubenswrapper[4937]: I0123 06:36:52.622134 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ppjrq" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="registry-server" probeResult="failure" output=<
Jan 23 06:36:52 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s
Jan 23 06:36:52 crc kubenswrapper[4937]: >
Jan 23 06:36:53 crc kubenswrapper[4937]: I0123 06:36:53.835101 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg2gr"]
Jan 23 06:36:53 crc kubenswrapper[4937]: I0123 06:36:53.836354 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rg2gr" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="registry-server" containerID="cri-o://968b9be7387cb8cbb8e1042436e2b5e47e5c3a093505abb8e66e3cbfc36bcd21" gracePeriod=2
Jan 23 06:36:55 crc kubenswrapper[4937]: I0123 06:36:55.913358 4937 generic.go:334] "Generic (PLEG): container finished" podID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerID="968b9be7387cb8cbb8e1042436e2b5e47e5c3a093505abb8e66e3cbfc36bcd21" exitCode=0
Jan 23 06:36:55 crc kubenswrapper[4937]: I0123 06:36:55.913420 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg2gr" event={"ID":"ea9ca5e0-3d00-45b5-b52d-74acee0081c9","Type":"ContainerDied","Data":"968b9be7387cb8cbb8e1042436e2b5e47e5c3a093505abb8e66e3cbfc36bcd21"}
Jan 23 06:36:56 crc kubenswrapper[4937]: I0123 06:36:56.923833 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg2gr" event={"ID":"ea9ca5e0-3d00-45b5-b52d-74acee0081c9","Type":"ContainerDied","Data":"e715b5d7d3939d9835a72bdddfc7358eddbae8447d647f5f69008bd27a656b04"}
Jan 23 06:36:56 crc kubenswrapper[4937]: I0123 06:36:56.924477 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e715b5d7d3939d9835a72bdddfc7358eddbae8447d647f5f69008bd27a656b04"
Jan 23 06:36:56 crc kubenswrapper[4937]: I0123 06:36:56.938333 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg2gr"
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.014893 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-utilities\") pod \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") "
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.015526 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-catalog-content\") pod \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") "
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.016826 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-utilities" (OuterVolumeSpecName: "utilities") pod "ea9ca5e0-3d00-45b5-b52d-74acee0081c9" (UID: "ea9ca5e0-3d00-45b5-b52d-74acee0081c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.020832 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h6cp\" (UniqueName: \"kubernetes.io/projected/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-kube-api-access-8h6cp\") pod \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\" (UID: \"ea9ca5e0-3d00-45b5-b52d-74acee0081c9\") "
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.021563 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.030849 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-kube-api-access-8h6cp" (OuterVolumeSpecName: "kube-api-access-8h6cp") pod "ea9ca5e0-3d00-45b5-b52d-74acee0081c9" (UID: "ea9ca5e0-3d00-45b5-b52d-74acee0081c9"). InnerVolumeSpecName "kube-api-access-8h6cp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.046866 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea9ca5e0-3d00-45b5-b52d-74acee0081c9" (UID: "ea9ca5e0-3d00-45b5-b52d-74acee0081c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.122886 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.122952 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h6cp\" (UniqueName: \"kubernetes.io/projected/ea9ca5e0-3d00-45b5-b52d-74acee0081c9-kube-api-access-8h6cp\") on node \"crc\" DevicePath \"\""
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.934435 4937 generic.go:334] "Generic (PLEG): container finished" podID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerID="6fc47931885b7cc5efe874fdfc2e57bad547dd0d10ebbf25671397bb64d735cc" exitCode=0
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.934499 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5jg8" event={"ID":"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3","Type":"ContainerDied","Data":"6fc47931885b7cc5efe874fdfc2e57bad547dd0d10ebbf25671397bb64d735cc"}
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.934577 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg2gr"
Jan 23 06:36:57 crc kubenswrapper[4937]: I0123 06:36:57.991374 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg2gr"]
Jan 23 06:36:58 crc kubenswrapper[4937]: I0123 06:36:58.006990 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg2gr"]
Jan 23 06:36:58 crc kubenswrapper[4937]: I0123 06:36:58.536296 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" path="/var/lib/kubelet/pods/ea9ca5e0-3d00-45b5-b52d-74acee0081c9/volumes"
Jan 23 06:36:58 crc kubenswrapper[4937]: I0123 06:36:58.542963 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vmr8l"
Jan 23 06:36:58 crc kubenswrapper[4937]: I0123 06:36:58.547428 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:36:58 crc kubenswrapper[4937]: I0123 06:36:58.592525 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:36:59 crc kubenswrapper[4937]: I0123 06:36:59.953481 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5jg8" event={"ID":"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3","Type":"ContainerStarted","Data":"05781add598dc5ac9a83f92c848f004318436b5792f71b9e51f62791a83fa8b3"}
Jan 23 06:36:59 crc kubenswrapper[4937]: I0123 06:36:59.984023 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n5jg8" podStartSLOduration=4.172910023 podStartE2EDuration="1m12.983976092s" podCreationTimestamp="2026-01-23 06:35:47 +0000 UTC" firstStartedPulling="2026-01-23 06:35:50.057570069 +0000 UTC m=+149.861336722" lastFinishedPulling="2026-01-23 06:36:58.868636108 +0000 UTC m=+218.672402791" observedRunningTime="2026-01-23 06:36:59.981886865 +0000 UTC m=+219.785653518" watchObservedRunningTime="2026-01-23 06:36:59.983976092 +0000 UTC m=+219.787742745"
Jan 23 06:37:00 crc kubenswrapper[4937]: I0123 06:37:00.838269 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmr8l"]
Jan 23 06:37:00 crc kubenswrapper[4937]: I0123 06:37:00.839110 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vmr8l" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="registry-server" containerID="cri-o://e2f2b64769150876fd4f5cd8304302828445e2d1324a354ac72feac891be2345" gracePeriod=2
Jan 23 06:37:01 crc kubenswrapper[4937]: I0123 06:37:01.165434 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9rt4"
Jan 23 06:37:01 crc kubenswrapper[4937]: I0123 06:37:01.231071 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9rt4"
Jan 23 06:37:01 crc kubenswrapper[4937]: I0123 06:37:01.653696 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ppjrq"
Jan 23 06:37:01 crc kubenswrapper[4937]: I0123 06:37:01.709452 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ppjrq"
Jan 23 06:37:03 crc kubenswrapper[4937]: I0123 06:37:03.832977 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppjrq"]
Jan 23 06:37:03 crc kubenswrapper[4937]: I0123 06:37:03.833263 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ppjrq" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="registry-server" containerID="cri-o://50ed0759a9957203b768e817445403a65e3637255645d366f8eabde26a3e3c9c" gracePeriod=2
Jan 23 06:37:03 crc kubenswrapper[4937]: I0123 06:37:03.984230 4937 generic.go:334] "Generic (PLEG): container finished" podID="177bc81c-0876-451a-8d35-76d17ae3259a" containerID="e2f2b64769150876fd4f5cd8304302828445e2d1324a354ac72feac891be2345" exitCode=0
Jan 23 06:37:03 crc kubenswrapper[4937]: I0123 06:37:03.984299 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr8l" event={"ID":"177bc81c-0876-451a-8d35-76d17ae3259a","Type":"ContainerDied","Data":"e2f2b64769150876fd4f5cd8304302828445e2d1324a354ac72feac891be2345"}
Jan 23 06:37:04 crc kubenswrapper[4937]: I0123 06:37:04.996640 4937 generic.go:334] "Generic (PLEG): container finished" podID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerID="50ed0759a9957203b768e817445403a65e3637255645d366f8eabde26a3e3c9c" exitCode=0
Jan 23 06:37:04 crc kubenswrapper[4937]: I0123 06:37:04.997204 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerDied","Data":"50ed0759a9957203b768e817445403a65e3637255645d366f8eabde26a3e3c9c"}
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.164390 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmr8l"
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.179686 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppjrq"
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.254796 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtqm\" (UniqueName: \"kubernetes.io/projected/177bc81c-0876-451a-8d35-76d17ae3259a-kube-api-access-nhtqm\") pod \"177bc81c-0876-451a-8d35-76d17ae3259a\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") "
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.254911 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n8cj\" (UniqueName: \"kubernetes.io/projected/df8602f3-84a0-42cf-99ac-fe547e004f25-kube-api-access-9n8cj\") pod \"df8602f3-84a0-42cf-99ac-fe547e004f25\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") "
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.254950 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-catalog-content\") pod \"df8602f3-84a0-42cf-99ac-fe547e004f25\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") "
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.255073 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-utilities\") pod \"177bc81c-0876-451a-8d35-76d17ae3259a\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") "
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.255120 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-utilities\") pod \"df8602f3-84a0-42cf-99ac-fe547e004f25\" (UID: \"df8602f3-84a0-42cf-99ac-fe547e004f25\") "
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.255174 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-catalog-content\") pod \"177bc81c-0876-451a-8d35-76d17ae3259a\" (UID: \"177bc81c-0876-451a-8d35-76d17ae3259a\") "
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.259891 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-utilities" (OuterVolumeSpecName: "utilities") pod "177bc81c-0876-451a-8d35-76d17ae3259a" (UID: "177bc81c-0876-451a-8d35-76d17ae3259a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.259935 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-utilities" (OuterVolumeSpecName: "utilities") pod "df8602f3-84a0-42cf-99ac-fe547e004f25" (UID: "df8602f3-84a0-42cf-99ac-fe547e004f25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.279434 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8602f3-84a0-42cf-99ac-fe547e004f25-kube-api-access-9n8cj" (OuterVolumeSpecName: "kube-api-access-9n8cj") pod "df8602f3-84a0-42cf-99ac-fe547e004f25" (UID: "df8602f3-84a0-42cf-99ac-fe547e004f25"). InnerVolumeSpecName "kube-api-access-9n8cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.280796 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177bc81c-0876-451a-8d35-76d17ae3259a-kube-api-access-nhtqm" (OuterVolumeSpecName: "kube-api-access-nhtqm") pod "177bc81c-0876-451a-8d35-76d17ae3259a" (UID: "177bc81c-0876-451a-8d35-76d17ae3259a"). InnerVolumeSpecName "kube-api-access-nhtqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.324788 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "177bc81c-0876-451a-8d35-76d17ae3259a" (UID: "177bc81c-0876-451a-8d35-76d17ae3259a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.357233 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.357285 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.357297 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177bc81c-0876-451a-8d35-76d17ae3259a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.357312 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhtqm\" (UniqueName: \"kubernetes.io/projected/177bc81c-0876-451a-8d35-76d17ae3259a-kube-api-access-nhtqm\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.357353 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n8cj\" (UniqueName: \"kubernetes.io/projected/df8602f3-84a0-42cf-99ac-fe547e004f25-kube-api-access-9n8cj\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.396362 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df8602f3-84a0-42cf-99ac-fe547e004f25" (UID: "df8602f3-84a0-42cf-99ac-fe547e004f25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:37:05 crc kubenswrapper[4937]: I0123 06:37:05.459043 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df8602f3-84a0-42cf-99ac-fe547e004f25-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.009183 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vmr8l" event={"ID":"177bc81c-0876-451a-8d35-76d17ae3259a","Type":"ContainerDied","Data":"9778491a4f0ef56dda6fa766ea472b7220dfd0ad0dbc51df73a5d5a4783f4e53"}
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.009239 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vmr8l"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.011818 4937 scope.go:117] "RemoveContainer" containerID="e2f2b64769150876fd4f5cd8304302828445e2d1324a354ac72feac891be2345"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.017204 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppjrq" event={"ID":"df8602f3-84a0-42cf-99ac-fe547e004f25","Type":"ContainerDied","Data":"5254fba96f7944ddc8c1582addb8d82ad4093ff5c4dedb8fa655d721531494dc"}
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.017370 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppjrq"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.055441 4937 scope.go:117] "RemoveContainer" containerID="ac95cc929132e9a3f7ece90a4f72a5e700eeafbd7d2ba8e63f8b41720b9439e9"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.089459 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vmr8l"]
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.091091 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vmr8l"]
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.094713 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppjrq"]
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.094783 4937 scope.go:117] "RemoveContainer" containerID="272a6e43065fdcac90c7c67b8c07a439e8b97b1e6e16f8daad48196be9446a0e"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.097850 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ppjrq"]
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.120645 4937 scope.go:117] "RemoveContainer" containerID="50ed0759a9957203b768e817445403a65e3637255645d366f8eabde26a3e3c9c"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.163980 4937 scope.go:117] "RemoveContainer" containerID="dac6efff1426041abaa622a97b906e9a1550e112b2fd1eef52008b0e4162de0b"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.210108 4937 scope.go:117] "RemoveContainer" containerID="85c1bfc21413992556b3c6739acd237837245ce230acfcf3e5df934a19cfd26e"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.533504 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" path="/var/lib/kubelet/pods/177bc81c-0876-451a-8d35-76d17ae3259a/volumes"
Jan 23 06:37:06 crc kubenswrapper[4937]: I0123 06:37:06.534206 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" path="/var/lib/kubelet/pods/df8602f3-84a0-42cf-99ac-fe547e004f25/volumes"
Jan 23 06:37:07 crc kubenswrapper[4937]: I0123 06:37:07.724349 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:37:07 crc kubenswrapper[4937]: I0123 06:37:07.724453 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:37:07 crc kubenswrapper[4937]: I0123 06:37:07.724539 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 06:37:07 crc kubenswrapper[4937]: I0123 06:37:07.725571 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 06:37:07 crc kubenswrapper[4937]: I0123 06:37:07.725805 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d" gracePeriod=600
Jan 23 06:37:08 crc kubenswrapper[4937]: I0123 06:37:08.040618 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d" exitCode=0
Jan 23 06:37:08 crc kubenswrapper[4937]: I0123 06:37:08.040683 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d"}
Jan 23 06:37:08 crc kubenswrapper[4937]: I0123 06:37:08.484704 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:37:08 crc kubenswrapper[4937]: I0123 06:37:08.485638 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:37:08 crc kubenswrapper[4937]: I0123 06:37:08.562135 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:37:09 crc kubenswrapper[4937]: I0123 06:37:09.051483 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"0c3fa8299bd0b6b9a8218ede66a23b2f4276f22508c1f048b3346b7c7147e467"}
Jan 23 06:37:09 crc kubenswrapper[4937]: I0123 06:37:09.100849 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:37:09 crc kubenswrapper[4937]: I0123 06:37:09.247970 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rsth7"]
Jan 23 06:37:12 crc kubenswrapper[4937]: I0123 06:37:12.239145 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n5jg8"]
Jan 23 06:37:12 crc kubenswrapper[4937]: I0123 06:37:12.240027 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n5jg8" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="registry-server" containerID="cri-o://05781add598dc5ac9a83f92c848f004318436b5792f71b9e51f62791a83fa8b3" gracePeriod=2
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.082767 4937 generic.go:334] "Generic (PLEG): container finished" podID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerID="05781add598dc5ac9a83f92c848f004318436b5792f71b9e51f62791a83fa8b3" exitCode=0
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.082860 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5jg8" event={"ID":"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3","Type":"ContainerDied","Data":"05781add598dc5ac9a83f92c848f004318436b5792f71b9e51f62791a83fa8b3"}
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.216768 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.276542 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrmxc\" (UniqueName: \"kubernetes.io/projected/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-kube-api-access-jrmxc\") pod \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") "
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.276750 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-catalog-content\") pod \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") "
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.277003 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-utilities\") pod \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\" (UID: \"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3\") "
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.279150 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-utilities" (OuterVolumeSpecName: "utilities") pod "82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" (UID: "82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.289433 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-kube-api-access-jrmxc" (OuterVolumeSpecName: "kube-api-access-jrmxc") pod "82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" (UID: "82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3"). InnerVolumeSpecName "kube-api-access-jrmxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.330061 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" (UID: "82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.378741 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.378808 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrmxc\" (UniqueName: \"kubernetes.io/projected/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-kube-api-access-jrmxc\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:13 crc kubenswrapper[4937]: I0123 06:37:13.378828 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.091350 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n5jg8" event={"ID":"82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3","Type":"ContainerDied","Data":"ef609b936c8bf00e7e7ee7f5e03cfd7e0410cc6e567ac1f3237426be5b945eb0"}
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.091483 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n5jg8"
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.091914 4937 scope.go:117] "RemoveContainer" containerID="05781add598dc5ac9a83f92c848f004318436b5792f71b9e51f62791a83fa8b3"
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.126624 4937 scope.go:117] "RemoveContainer" containerID="6fc47931885b7cc5efe874fdfc2e57bad547dd0d10ebbf25671397bb64d735cc"
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.135901 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n5jg8"]
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.146827 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n5jg8"]
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.150126 4937 scope.go:117] "RemoveContainer" containerID="d77f9c4766c01712fadbd8656d08a7eb571906da4075cc3fa57403f74908c0bd"
Jan 23 06:37:14 crc kubenswrapper[4937]: I0123 06:37:14.539235 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" path="/var/lib/kubelet/pods/82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3/volumes"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.888047 4937 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.888334 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.888349 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.888358 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.889752 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.889836 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.889852 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.889870 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="extract-content"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.889885 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="extract-content"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.889912 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="extract-content"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.889928 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="extract-content"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.889949 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.889962 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.889977 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.889989 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.890005 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="extract-content"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890033 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="extract-content"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.890060 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890072 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="registry-server"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.890092 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890104 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.890146 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890160 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="extract-utilities"
Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.890180 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9"
containerName="extract-content" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890196 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="extract-content" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890517 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="177bc81c-0876-451a-8d35-76d17ae3259a" containerName="registry-server" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890551 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9ca5e0-3d00-45b5-b52d-74acee0081c9" containerName="registry-server" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890570 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b0e24e-d09c-45a4-95f3-d7eb89a8dfe3" containerName="registry-server" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.890629 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8602f3-84a0-42cf-99ac-fe547e004f25" containerName="registry-server" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.891276 4937 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.891321 4937 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.891421 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.891882 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6" gracePeriod=15 Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892006 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84" gracePeriod=15 Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.891897 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765" gracePeriod=15 Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892054 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76" gracePeriod=15 Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892030 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892165 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 
06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892186 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892199 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892207 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892213 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892228 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892235 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892243 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892251 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892262 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892273 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 06:37:16 crc kubenswrapper[4937]: E0123 06:37:16.892284 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892291 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892146 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c" gracePeriod=15 Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892392 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892567 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892633 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892700 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.892724 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 
06:37:16.892756 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 06:37:16 crc kubenswrapper[4937]: I0123 06:37:16.896781 4937 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035380 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035450 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035475 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035653 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035721 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035762 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035796 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.035842 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.115135 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 06:37:17 crc kubenswrapper[4937]: 
I0123 06:37:17.116423 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.117122 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765" exitCode=0 Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.117156 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c" exitCode=0 Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.117163 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84" exitCode=0 Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.117172 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76" exitCode=2 Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.117263 4937 scope.go:117] "RemoveContainer" containerID="8b45af50a8d948e737573f546bc34883cee47d5bad3a31911ab40fc7a0491620" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137156 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137215 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137246 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137269 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137286 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137303 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137340 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137350 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137399 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137434 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137469 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137483 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137509 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137524 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137618 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:17 crc kubenswrapper[4937]: I0123 06:37:17.137639 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 06:37:18 crc kubenswrapper[4937]: I0123 06:37:18.128187 4937 generic.go:334] "Generic (PLEG): container finished" podID="5ed93f4b-59c0-4118-88b4-285140369236" containerID="e9996af549fe7d4ba78c93d4b6a5eb9b7a3d160091b6f8c2160782f4b2bc872c" exitCode=0 Jan 23 06:37:18 crc kubenswrapper[4937]: I0123 06:37:18.128404 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5ed93f4b-59c0-4118-88b4-285140369236","Type":"ContainerDied","Data":"e9996af549fe7d4ba78c93d4b6a5eb9b7a3d160091b6f8c2160782f4b2bc872c"} Jan 23 06:37:18 crc kubenswrapper[4937]: I0123 06:37:18.130473 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 23 06:37:18 crc kubenswrapper[4937]: I0123 06:37:18.136256 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.393620 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.395007 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.395775 4937 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.396073 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.396324 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.396620 4937 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.397019 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488200 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ed93f4b-59c0-4118-88b4-285140369236-kube-api-access\") pod \"5ed93f4b-59c0-4118-88b4-285140369236\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488267 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488303 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-kubelet-dir\") pod \"5ed93f4b-59c0-4118-88b4-285140369236\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488385 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488419 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488442 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-var-lock\") pod \"5ed93f4b-59c0-4118-88b4-285140369236\" (UID: \"5ed93f4b-59c0-4118-88b4-285140369236\") " Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488473 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-var-lock" (OuterVolumeSpecName: "var-lock") pod "5ed93f4b-59c0-4118-88b4-285140369236" (UID: "5ed93f4b-59c0-4118-88b4-285140369236"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488527 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488525 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ed93f4b-59c0-4118-88b4-285140369236" (UID: "5ed93f4b-59c0-4118-88b4-285140369236"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488546 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.488567 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.489142 4937 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.489175 4937 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.489193 4937 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.489211 4937 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.489227 4937 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ed93f4b-59c0-4118-88b4-285140369236-var-lock\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.498255 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ed93f4b-59c0-4118-88b4-285140369236-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ed93f4b-59c0-4118-88b4-285140369236" (UID: "5ed93f4b-59c0-4118-88b4-285140369236"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:37:19 crc kubenswrapper[4937]: I0123 06:37:19.590534 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ed93f4b-59c0-4118-88b4-285140369236-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.160017 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.161484 4937 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6" exitCode=0
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.161623 4937 scope.go:117] "RemoveContainer" containerID="8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.161992 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.163323 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5ed93f4b-59c0-4118-88b4-285140369236","Type":"ContainerDied","Data":"fd66381b75448a8612d1d0e1d0ab40955d044b905916163c914cf41ab803906a"}
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.163361 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd66381b75448a8612d1d0e1d0ab40955d044b905916163c914cf41ab803906a"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.163403 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.180732 4937 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.181310 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.199669 4937 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.200196 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.200962 4937 scope.go:117] "RemoveContainer" containerID="c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.233191 4937 scope.go:117] "RemoveContainer" containerID="04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.254085 4937 scope.go:117] "RemoveContainer" containerID="6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.275924 4937 scope.go:117] "RemoveContainer" containerID="53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.298470 4937 scope.go:117] "RemoveContainer" containerID="57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.319959 4937 scope.go:117] "RemoveContainer" containerID="8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765"
Jan 23 06:37:20 crc kubenswrapper[4937]: E0123 06:37:20.324232 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\": container with ID starting with 8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765 not found: ID does not exist" containerID="8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.324277 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765"} err="failed to get container status \"8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\": rpc error: code = NotFound desc = could not find container \"8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765\": container with ID starting with 8b35174c0487b96bdb28acf3ab4e6822b8c78c86c99f82f0d2a19ad1faf24765 not found: ID does not exist"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.324302 4937 scope.go:117] "RemoveContainer" containerID="c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c"
Jan 23 06:37:20 crc kubenswrapper[4937]: E0123 06:37:20.324826 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\": container with ID starting with c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c not found: ID does not exist" containerID="c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.324889 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c"} err="failed to get container status \"c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\": rpc error: code = NotFound desc = could not find container \"c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c\": container with ID starting with c3b497d33b9c77600530d110f4086648a09836e97f9287a38b5a66e5ddc4b37c not found: ID does not exist"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.324929 4937 scope.go:117] "RemoveContainer" containerID="04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84"
Jan 23 06:37:20 crc kubenswrapper[4937]: E0123 06:37:20.325553 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\": container with ID starting with 04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84 not found: ID does not exist" containerID="04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.325585 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84"} err="failed to get container status \"04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\": rpc error: code = NotFound desc = could not find container \"04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84\": container with ID starting with 04a03829d59e78324d9b9f7973c99c96778a223076bea9245de4dbad41ed1d84 not found: ID does not exist"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.325624 4937 scope.go:117] "RemoveContainer" containerID="6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76"
Jan 23 06:37:20 crc kubenswrapper[4937]: E0123 06:37:20.326198 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\": container with ID starting with 6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76 not found: ID does not exist" containerID="6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.326226 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76"} err="failed to get container status \"6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\": rpc error: code = NotFound desc = could not find container \"6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76\": container with ID starting with 6afbf04c11e5b747b17731bf1ce855d5bf025b7e7538b202585dfacc365d1b76 not found: ID does not exist"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.326244 4937 scope.go:117] "RemoveContainer" containerID="53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6"
Jan 23 06:37:20 crc kubenswrapper[4937]: E0123 06:37:20.326524 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\": container with ID starting with 53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6 not found: ID does not exist" containerID="53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.326552 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6"} err="failed to get container status \"53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\": rpc error: code = NotFound desc = could not find container \"53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6\": container with ID starting with 53919b454ee1f33cf7f2ec70265bebcdcf63eaccf98469f760013f9631047eb6 not found: ID does not exist"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.326569 4937 scope.go:117] "RemoveContainer" containerID="57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa"
Jan 23 06:37:20 crc kubenswrapper[4937]: E0123 06:37:20.327490 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\": container with ID starting with 57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa not found: ID does not exist" containerID="57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.327562 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa"} err="failed to get container status \"57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\": rpc error: code = NotFound desc = could not find container \"57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa\": container with ID starting with 57b58e5987da200ebe120369e05a05cd12a68b242235c9cabd9686e329fe0baa not found: ID does not exist"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.534051 4937 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.534524 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:20 crc kubenswrapper[4937]: I0123 06:37:20.534724 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.290559 4937 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.291354 4937 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.291920 4937 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.292443 4937 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.293116 4937 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:21 crc kubenswrapper[4937]: I0123 06:37:21.293179 4937 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.293889 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.495024 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.896251 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.951518 4937 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 06:37:21 crc kubenswrapper[4937]: I0123 06:37:21.952107 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 06:37:21 crc kubenswrapper[4937]: E0123 06:37:21.977111 4937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d48c58f4ae055 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:37:21.976279125 +0000 UTC m=+241.780045778,LastTimestamp:2026-01-23 06:37:21.976279125 +0000 UTC m=+241.780045778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 06:37:22 crc kubenswrapper[4937]: I0123 06:37:22.182389 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e8ac368bfaf6967e2cc4b19e7d564c27e72c918c29224019d9bcaecaf5f19e40"}
Jan 23 06:37:22 crc kubenswrapper[4937]: E0123 06:37:22.697842 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s"
Jan 23 06:37:23 crc kubenswrapper[4937]: I0123 06:37:23.194195 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"16d213b325fd1d5d13835b0a08e10029b0c1fb4bc4c99d4ca4606e9f8ef14f9f"}
Jan 23 06:37:23 crc kubenswrapper[4937]: E0123 06:37:23.195215 4937 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 06:37:23 crc kubenswrapper[4937]: I0123 06:37:23.195347 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:24 crc kubenswrapper[4937]: E0123 06:37:24.201774 4937 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 06:37:24 crc kubenswrapper[4937]: E0123 06:37:24.313095 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s"
Jan 23 06:37:27 crc kubenswrapper[4937]: E0123 06:37:27.514507 4937 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="6.4s"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.045632 4937 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d48c58f4ae055 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 06:37:21.976279125 +0000 UTC m=+241.780045778,LastTimestamp:2026-01-23 06:37:21.976279125 +0000 UTC m=+241.780045778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 06:37:29 crc kubenswrapper[4937]: I0123 06:37:29.244770 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 23 06:37:29 crc kubenswrapper[4937]: I0123 06:37:29.244905 4937 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4" exitCode=1
Jan 23 06:37:29 crc kubenswrapper[4937]: I0123 06:37:29.245059 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4"}
Jan 23 06:37:29 crc kubenswrapper[4937]: I0123 06:37:29.245826 4937 scope.go:117] "RemoveContainer" containerID="274226489e529ac581fcee4964a4fe235c34b97c7de10900bfa8a801beaeefe4"
Jan 23 06:37:29 crc kubenswrapper[4937]: I0123 06:37:29.246511 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: I0123 06:37:29.247163 4937 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.379819 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:37:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:37:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:37:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T06:37:29Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.381298 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.382202 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.382569 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.383149 4937 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:29 crc kubenswrapper[4937]: E0123 06:37:29.383190 4937 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.258818 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.258916 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b8fa875d49a2295943b98b11072ce8793fd6fd64329bd9baf92c644fc359c2a"}
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.260068 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.260733 4937 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.530406 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.530969 4937 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:30 crc kubenswrapper[4937]: I0123 06:37:30.952531 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 06:37:31 crc kubenswrapper[4937]: I0123 06:37:31.526314 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 06:37:31 crc kubenswrapper[4937]: I0123 06:37:31.527521 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:31 crc kubenswrapper[4937]: I0123 06:37:31.528566 4937 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:31 crc kubenswrapper[4937]: I0123 06:37:31.550851 4937 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8"
Jan 23 06:37:31 crc kubenswrapper[4937]: I0123 06:37:31.550901 4937 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8"
Jan 23 06:37:31 crc kubenswrapper[4937]: E0123 06:37:31.551636 4937 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 06:37:31 crc kubenswrapper[4937]: I0123 06:37:31.553082 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 06:37:31 crc kubenswrapper[4937]: W0123 06:37:31.587361 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-39b5f3427957d1f00bf717b68b826154c5e01c86d33b52744e457fd6b9032b3c WatchSource:0}: Error finding container 39b5f3427957d1f00bf717b68b826154c5e01c86d33b52744e457fd6b9032b3c: Status 404 returned error can't find the container with id 39b5f3427957d1f00bf717b68b826154c5e01c86d33b52744e457fd6b9032b3c
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.275041 4937 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3a7c12c3c5f6b06adf3e5c831ba012c5913727071b8167517eb22d0e9cb79e87" exitCode=0
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.275109 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3a7c12c3c5f6b06adf3e5c831ba012c5913727071b8167517eb22d0e9cb79e87"}
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.275154 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39b5f3427957d1f00bf717b68b826154c5e01c86d33b52744e457fd6b9032b3c"}
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.275782 4937 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8"
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.275858 4937 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8"
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.276254 4937 status_manager.go:851] "Failed to get status for pod" podUID="5ed93f4b-59c0-4118-88b4-285140369236" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:32 crc kubenswrapper[4937]: E0123 06:37:32.276613 4937 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 06:37:32 crc kubenswrapper[4937]: I0123 06:37:32.276960 4937 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused"
Jan 23 06:37:33 crc kubenswrapper[4937]: I0123 06:37:33.297817 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b0b918a01852f41b68a9dcbb1644296417c3cb49dc89eba035727521400a7253"}
Jan 23 06:37:33 crc kubenswrapper[4937]: I0123 06:37:33.298280 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4f014d25683c3019b36c09582c1906bea93c75b35a753de1287fd96d755b10ad"}
Jan 23 06:37:33 crc kubenswrapper[4937]: I0123 06:37:33.298293 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f8af3130e24ebbda9645d59da63ee390d32645994354299a87c85e015f15688"}
Jan 23 06:37:33 crc kubenswrapper[4937]: I0123 06:37:33.303249 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 06:37:33 crc kubenswrapper[4937]: I0123 06:37:33.310306 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 06:37:34 crc kubenswrapper[4937]: I0123 06:37:34.286017 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" podUID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" containerName="oauth-openshift" containerID="cri-o://e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518" gracePeriod=15
Jan 23 06:37:34 crc kubenswrapper[4937]: I0123 06:37:34.316554 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"525ed223873834a0e3306f9439ff3835f3704e8971506265f0c9e86f32584a77"}
Jan 23 06:37:34 crc kubenswrapper[4937]: I0123 06:37:34.317088 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f90c9f0a5ede0b0a56726c4f0c42b411bae91b39ca0893194e39674eaf9ac32a"}
Jan 23 06:37:34 crc kubenswrapper[4937]: I0123 06:37:34.316889 4937 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8"
Jan 23 06:37:34 crc kubenswrapper[4937]: I0123 06:37:34.317257 4937 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8"
Jan 23 06:37:34 crc kubenswrapper[4937]: I0123 06:37:34.670421 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7"
Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.324175 4937 generic.go:334] "Generic (PLEG): container finished" podID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" containerID="e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518" exitCode=0
Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.324250 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" event={"ID":"5faf4ca9-b656-45c0-8a88-9fc38060e5a9","Type":"ContainerDied","Data":"e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518"}
Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.324708 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" event={"ID":"5faf4ca9-b656-45c0-8a88-9fc38060e5a9","Type":"ContainerDied","Data":"2f37fff48a47624620c7712fb6e13c47257ac28b44f1eb961acfe0cb78384dea"}
Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.324740 4937 scope.go:117] "RemoveContainer" containerID="e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518"
Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.324294 4937 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rsth7" Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.348296 4937 scope.go:117] "RemoveContainer" containerID="e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518" Jan 23 06:37:35 crc kubenswrapper[4937]: E0123 06:37:35.349104 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518\": container with ID starting with e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518 not found: ID does not exist" containerID="e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518" Jan 23 06:37:35 crc kubenswrapper[4937]: I0123 06:37:35.349165 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518"} err="failed to get container status \"e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518\": rpc error: code = NotFound desc = could not find container \"e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518\": container with ID starting with e29107b543feb2d4e0b23210ab08c44ea3027ba177e9bea2bfd400d4c92db518 not found: ID does not exist" Jan 23 06:37:36 crc kubenswrapper[4937]: I0123 06:37:36.553243 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:36 crc kubenswrapper[4937]: I0123 06:37:36.553322 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:36 crc kubenswrapper[4937]: I0123 06:37:36.562257 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.327485 
4937 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608067 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhd65\" (UniqueName: \"kubernetes.io/projected/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-kube-api-access-jhd65\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608631 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-error\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608659 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-dir\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608692 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-router-certs\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608722 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-serving-cert\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 
crc kubenswrapper[4937]: I0123 06:37:39.608775 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608870 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-ocp-branding-template\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608916 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-provider-selection\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608936 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-login\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.608964 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-idp-0-file-data\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " 
Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.609006 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-policies\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.609057 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-service-ca\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.609083 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-trusted-ca-bundle\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.609117 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-cliconfig\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.609174 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-session\") pod \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\" (UID: \"5faf4ca9-b656-45c0-8a88-9fc38060e5a9\") " Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.609676 4937 reconciler_common.go:293] "Volume 
detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.610398 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.610532 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.611177 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.611881 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.620864 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.628899 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.629254 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.629293 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-kube-api-access-jhd65" (OuterVolumeSpecName: "kube-api-access-jhd65") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "kube-api-access-jhd65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.629978 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.630300 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.630426 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.630562 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.630856 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5faf4ca9-b656-45c0-8a88-9fc38060e5a9" (UID: "5faf4ca9-b656-45c0-8a88-9fc38060e5a9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710531 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710574 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710608 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710621 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710636 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhd65\" (UniqueName: 
\"kubernetes.io/projected/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-kube-api-access-jhd65\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710649 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710664 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710676 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710689 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710710 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710725 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: 
I0123 06:37:39.710736 4937 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:39 crc kubenswrapper[4937]: I0123 06:37:39.710750 4937 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf4ca9-b656-45c0-8a88-9fc38060e5a9-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 06:37:40 crc kubenswrapper[4937]: E0123 06:37:40.193784 4937 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 23 06:37:40 crc kubenswrapper[4937]: I0123 06:37:40.359120 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:40 crc kubenswrapper[4937]: I0123 06:37:40.359351 4937 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8" Jan 23 06:37:40 crc kubenswrapper[4937]: I0123 06:37:40.359411 4937 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8" Jan 23 06:37:40 crc kubenswrapper[4937]: I0123 06:37:40.363737 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 06:37:40 crc kubenswrapper[4937]: I0123 06:37:40.547005 4937 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0ae46a9d-70cd-469e-b9f5-7a0ef6a660a6" Jan 23 06:37:40 crc kubenswrapper[4937]: I0123 06:37:40.958300 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 06:37:41 crc kubenswrapper[4937]: I0123 06:37:41.365212 4937 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8" Jan 23 06:37:41 crc kubenswrapper[4937]: I0123 06:37:41.365256 4937 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c628f3b9-6703-42ed-9df0-2b39b603b0f8" Jan 23 06:37:41 crc kubenswrapper[4937]: I0123 06:37:41.370789 4937 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0ae46a9d-70cd-469e-b9f5-7a0ef6a660a6" Jan 23 06:37:49 crc kubenswrapper[4937]: I0123 06:37:49.283469 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 06:37:49 crc kubenswrapper[4937]: I0123 06:37:49.405258 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 06:37:49 crc kubenswrapper[4937]: I0123 06:37:49.654302 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 06:37:49 crc kubenswrapper[4937]: I0123 06:37:49.993066 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 06:37:50.279219 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 06:37:50.332108 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 
06:37:50.355999 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 06:37:50.685195 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 06:37:50.770073 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 06:37:50.797697 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 06:37:50 crc kubenswrapper[4937]: I0123 06:37:50.836619 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 06:37:51 crc kubenswrapper[4937]: I0123 06:37:51.126653 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 06:37:51 crc kubenswrapper[4937]: I0123 06:37:51.449267 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 06:37:51 crc kubenswrapper[4937]: I0123 06:37:51.777431 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 06:37:51 crc kubenswrapper[4937]: I0123 06:37:51.780782 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.149837 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.152238 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 06:37:52 crc kubenswrapper[4937]: 
I0123 06:37:52.198444 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.204017 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.254859 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.371303 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.560012 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.591794 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.645226 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.765811 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.863303 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.871129 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.925792 4937 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config"
Jan 23 06:37:52 crc kubenswrapper[4937]: I0123 06:37:52.943926 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.045069 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.208743 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.235578 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.252562 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.279193 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.322017 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.339090 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.358881 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.403048 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.414014 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.478341 4937 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.482054 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rsth7","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.482108 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.493165 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.509848 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.509832289 podStartE2EDuration="14.509832289s" podCreationTimestamp="2026-01-23 06:37:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:37:53.506429515 +0000 UTC m=+273.310196178" watchObservedRunningTime="2026-01-23 06:37:53.509832289 +0000 UTC m=+273.313598942"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.564018 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.592059 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.683053 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.734293 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.780662 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.781277 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.805541 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.831132 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 23 06:37:53 crc kubenswrapper[4937]: I0123 06:37:53.969707 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.067267 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.072965 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.103409 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.135861 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.184147 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.416401 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.424213 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.495370 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.538950 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" path="/var/lib/kubelet/pods/5faf4ca9-b656-45c0-8a88-9fc38060e5a9/volumes"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.563741 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.580677 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.589981 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.642995 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.678691 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.685985 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.688878 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.710339 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.795659 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.847200 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 06:37:54 crc kubenswrapper[4937]: I0123 06:37:54.900870 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.045705 4937 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.079100 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.103979 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.213259 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.396816 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.453135 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.469208 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.520887 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.642771 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.810967 4937 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.853518 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.857692 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.860947 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.930161 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.939134 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 23 06:37:55 crc kubenswrapper[4937]: I0123 06:37:55.992929 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.120381 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.126373 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.166890 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.173683 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.175937 4937 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.195205 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.205889 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.318465 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.325756 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.331820 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.360617 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.410258 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.462076 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.533677 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.533710 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.568488 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.584688 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.593647 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.806159 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.807577 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.863163 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.880399 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.895560 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.916767 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 23 06:37:56 crc kubenswrapper[4937]: I0123 06:37:56.941467 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.019147 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.034780 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.076019 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.093296 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.122039 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.189959 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.192513 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.193856 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.257663 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.261444 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.296995 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.339750 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.442111 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.605533 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.686222 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.831975 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.862659 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.886199 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 23 06:37:57 crc kubenswrapper[4937]: I0123 06:37:57.963720 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.032067 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.330713 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.336172 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.371890 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.496643 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.532231 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.601032 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.633485 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.678327 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.720171 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.767114 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.773199 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.816815 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.875399 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 23 06:37:58 crc kubenswrapper[4937]: I0123 06:37:58.909313 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.017633 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.046894 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.114230 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.121683 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.205911 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.299141 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.375186 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.409756 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.417899 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.418759 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.425160 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.483731 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.546294 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.680510 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.744744 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.757504 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.780095 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.966412 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 23 06:37:59 crc kubenswrapper[4937]: I0123 06:37:59.991807 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.070342 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.144276 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.175395 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.255201 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.378874 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.488788 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.537898 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.777993 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.813044 4937 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.822062 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 23 06:38:00 crc kubenswrapper[4937]: I0123 06:38:00.825644 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.016469 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.114184 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.192669 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.263823 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.328961 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.341334 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.391858 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.481368 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.731690 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.762354 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.766411 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.794975 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.806633 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.889727 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.933690 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.940660 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.992726 4937 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 23 06:38:01 crc kubenswrapper[4937]: I0123 06:38:01.993194 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://16d213b325fd1d5d13835b0a08e10029b0c1fb4bc4c99d4ca4606e9f8ef14f9f" gracePeriod=5
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.007471 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.063489 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.119804 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.167972 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.173856 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.234945 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.241930 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.291781 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.296066 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.354555 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.396658 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.492021 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.512785 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.563541 4937 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.607879 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.642055 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.681944 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.834396 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.879028 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-q8qbx"]
Jan 23 06:38:02 crc kubenswrapper[4937]: E0123 06:38:02.879366 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ed93f4b-59c0-4118-88b4-285140369236" containerName="installer"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.879387 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ed93f4b-59c0-4118-88b4-285140369236" containerName="installer"
Jan 23 06:38:02 crc kubenswrapper[4937]: E0123 06:38:02.879421 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" containerName="oauth-openshift"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.879434 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" containerName="oauth-openshift"
Jan 23 06:38:02 crc kubenswrapper[4937]: E0123 06:38:02.879452 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.879465 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.879663 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.879940 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5faf4ca9-b656-45c0-8a88-9fc38060e5a9" containerName="oauth-openshift"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.880019 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.880039 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ed93f4b-59c0-4118-88b4-285140369236" containerName="installer"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.880790 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.885464 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.885549 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.885971 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.889823 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.889995 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.890288 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.890290 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.889954 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.889926 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.890396 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.890790 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.901714 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-q8qbx"]
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.905758 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.907214 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.914547 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.919099 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 23 06:38:02 crc kubenswrapper[4937]: I0123 06:38:02.933522 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043379 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx"
Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043482 4937 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/051138ef-3bc6-477c-bdf4-0849b0b10099-audit-dir\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043554 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-audit-policies\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043637 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043678 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043715 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043766 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043829 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043883 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kjp\" (UniqueName: \"kubernetes.io/projected/051138ef-3bc6-477c-bdf4-0849b0b10099-kube-api-access-f9kjp\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.043989 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: 
\"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.044082 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.044153 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.044212 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.044272 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 
06:38:03.052191 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.141430 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.145985 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146047 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146100 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146157 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146212 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/051138ef-3bc6-477c-bdf4-0849b0b10099-audit-dir\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146265 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-audit-policies\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146305 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146340 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146381 4937 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146431 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146478 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146516 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kjp\" (UniqueName: \"kubernetes.io/projected/051138ef-3bc6-477c-bdf4-0849b0b10099-kube-api-access-f9kjp\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146555 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: 
\"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.146626 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.147533 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/051138ef-3bc6-477c-bdf4-0849b0b10099-audit-dir\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.147859 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.148625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-audit-policies\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.148830 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.148852 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.156333 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.156530 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.156708 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 
06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.156968 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.171050 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.171818 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kjp\" (UniqueName: \"kubernetes.io/projected/051138ef-3bc6-477c-bdf4-0849b0b10099-kube-api-access-f9kjp\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.176567 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.177109 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.177536 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/051138ef-3bc6-477c-bdf4-0849b0b10099-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-q8qbx\" (UID: \"051138ef-3bc6-477c-bdf4-0849b0b10099\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.223504 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.337184 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.374035 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.391232 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.408430 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.612951 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.628453 4937 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.634424 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.634647 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.645978 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.696828 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-q8qbx"] Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.758189 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.777784 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 06:38:03 crc kubenswrapper[4937]: I0123 06:38:03.779435 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.208630 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.254837 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.409784 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 06:38:04 crc 
kubenswrapper[4937]: I0123 06:38:04.442735 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.545413 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" event={"ID":"051138ef-3bc6-477c-bdf4-0849b0b10099","Type":"ContainerStarted","Data":"40ea9a08011bcb0afdbbbb5fc8bae1c1d7013b83df0177a585a4de63052d7a76"} Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.545525 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" event={"ID":"051138ef-3bc6-477c-bdf4-0849b0b10099","Type":"ContainerStarted","Data":"51c69a6fc770fa6d66f704856b86bcacf59336aea00e25d05b4b504232ca9681"} Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.546349 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.556091 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.572689 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f54ff7574-q8qbx" podStartSLOduration=55.572664329 podStartE2EDuration="55.572664329s" podCreationTimestamp="2026-01-23 06:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:38:04.571680411 +0000 UTC m=+284.375447134" watchObservedRunningTime="2026-01-23 06:38:04.572664329 +0000 UTC m=+284.376431002" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.927940 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" 
Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.936824 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 06:38:04 crc kubenswrapper[4937]: I0123 06:38:04.969483 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 06:38:05 crc kubenswrapper[4937]: I0123 06:38:05.036213 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 06:38:05 crc kubenswrapper[4937]: I0123 06:38:05.723343 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 06:38:05 crc kubenswrapper[4937]: I0123 06:38:05.909327 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 06:38:06 crc kubenswrapper[4937]: I0123 06:38:06.577988 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 06:38:06 crc kubenswrapper[4937]: I0123 06:38:06.630052 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.570941 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.571995 4937 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="16d213b325fd1d5d13835b0a08e10029b0c1fb4bc4c99d4ca4606e9f8ef14f9f" exitCode=137 Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.572109 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ac368bfaf6967e2cc4b19e7d564c27e72c918c29224019d9bcaecaf5f19e40" Jan 23 06:38:07 
crc kubenswrapper[4937]: I0123 06:38:07.603961 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.604113 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.621493 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.621583 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.621651 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.621741 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.621770 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.622105 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.622159 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.622487 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.622487 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.638432 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.722980 4937 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.723028 4937 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.723040 4937 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.723052 4937 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:07 crc kubenswrapper[4937]: I0123 06:38:07.723067 4937 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:08 crc kubenswrapper[4937]: I0123 06:38:08.537657 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 23 06:38:08 crc kubenswrapper[4937]: I0123 06:38:08.578572 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 06:38:18 crc kubenswrapper[4937]: I0123 06:38:18.655502 4937 generic.go:334] "Generic (PLEG): container finished" podID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerID="8472eb38158528f591e151fb55471c861b866310e1c09e0a8ead7e695c108f56" exitCode=0
Jan 23 06:38:18 crc kubenswrapper[4937]: I0123 06:38:18.655611 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" event={"ID":"dda6ba11-61ab-4501-afa3-5bb654f352ea","Type":"ContainerDied","Data":"8472eb38158528f591e151fb55471c861b866310e1c09e0a8ead7e695c108f56"}
Jan 23 06:38:18 crc kubenswrapper[4937]: I0123 06:38:18.657265 4937 scope.go:117] "RemoveContainer" containerID="8472eb38158528f591e151fb55471c861b866310e1c09e0a8ead7e695c108f56"
Jan 23 06:38:19 crc kubenswrapper[4937]: I0123 06:38:19.666941 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" event={"ID":"dda6ba11-61ab-4501-afa3-5bb654f352ea","Type":"ContainerStarted","Data":"6ddc27e9c0ae32be0e98701b084848dac66a8a2bad106a76e4f2c97fd6bce4a5"}
Jan 23 06:38:19 crc kubenswrapper[4937]: I0123 06:38:19.668409 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5"
Jan 23 06:38:19 crc kubenswrapper[4937]: I0123 06:38:19.669167 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5"
Jan 23 06:38:20 crc kubenswrapper[4937]: I0123 06:38:20.371901 4937 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.165100 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g9qqj"]
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.166163 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" podUID="8bda434a-853e-4281-9e1b-1d79f81f6856" containerName="controller-manager" containerID="cri-o://bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73" gracePeriod=30
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.264507 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"]
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.264820 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" podUID="863e1fad-048e-4104-aa38-ca05ffec260a" containerName="route-controller-manager" containerID="cri-o://cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a" gracePeriod=30
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.509829 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.574756 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703271 4937 generic.go:334] "Generic (PLEG): container finished" podID="863e1fad-048e-4104-aa38-ca05ffec260a" containerID="cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a" exitCode=0
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703341 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" event={"ID":"863e1fad-048e-4104-aa38-ca05ffec260a","Type":"ContainerDied","Data":"cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a"}
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703373 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7" event={"ID":"863e1fad-048e-4104-aa38-ca05ffec260a","Type":"ContainerDied","Data":"dfd396ae0ccfb7502208a66650eac2f4237a7173b500717bd46f64cdfce98c79"}
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703393 4937 scope.go:117] "RemoveContainer" containerID="cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703870 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-client-ca\") pod \"863e1fad-048e-4104-aa38-ca05ffec260a\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703908 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-config\") pod \"8bda434a-853e-4281-9e1b-1d79f81f6856\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703937 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hn6p\" (UniqueName: \"kubernetes.io/projected/863e1fad-048e-4104-aa38-ca05ffec260a-kube-api-access-2hn6p\") pod \"863e1fad-048e-4104-aa38-ca05ffec260a\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704015 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e1fad-048e-4104-aa38-ca05ffec260a-serving-cert\") pod \"863e1fad-048e-4104-aa38-ca05ffec260a\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704053 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-proxy-ca-bundles\") pod \"8bda434a-853e-4281-9e1b-1d79f81f6856\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704095 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-config\") pod \"863e1fad-048e-4104-aa38-ca05ffec260a\" (UID: \"863e1fad-048e-4104-aa38-ca05ffec260a\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704116 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-client-ca\") pod \"8bda434a-853e-4281-9e1b-1d79f81f6856\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704170 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxwx2\" (UniqueName: \"kubernetes.io/projected/8bda434a-853e-4281-9e1b-1d79f81f6856-kube-api-access-pxwx2\") pod \"8bda434a-853e-4281-9e1b-1d79f81f6856\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704222 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bda434a-853e-4281-9e1b-1d79f81f6856-serving-cert\") pod \"8bda434a-853e-4281-9e1b-1d79f81f6856\" (UID: \"8bda434a-853e-4281-9e1b-1d79f81f6856\") "
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.704740 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-client-ca" (OuterVolumeSpecName: "client-ca") pod "863e1fad-048e-4104-aa38-ca05ffec260a" (UID: "863e1fad-048e-4104-aa38-ca05ffec260a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.705042 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-config" (OuterVolumeSpecName: "config") pod "863e1fad-048e-4104-aa38-ca05ffec260a" (UID: "863e1fad-048e-4104-aa38-ca05ffec260a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.705068 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-config" (OuterVolumeSpecName: "config") pod "8bda434a-853e-4281-9e1b-1d79f81f6856" (UID: "8bda434a-853e-4281-9e1b-1d79f81f6856"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.703405 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.706130 4937 generic.go:334] "Generic (PLEG): container finished" podID="8bda434a-853e-4281-9e1b-1d79f81f6856" containerID="bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73" exitCode=0
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.706165 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" event={"ID":"8bda434a-853e-4281-9e1b-1d79f81f6856","Type":"ContainerDied","Data":"bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73"}
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.706233 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj" event={"ID":"8bda434a-853e-4281-9e1b-1d79f81f6856","Type":"ContainerDied","Data":"075027acb379c0c88b278018b7697aa5bb51c196eec7164669426b9741e12aed"}
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.706309 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-g9qqj"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.706400 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-client-ca" (OuterVolumeSpecName: "client-ca") pod "8bda434a-853e-4281-9e1b-1d79f81f6856" (UID: "8bda434a-853e-4281-9e1b-1d79f81f6856"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.706725 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8bda434a-853e-4281-9e1b-1d79f81f6856" (UID: "8bda434a-853e-4281-9e1b-1d79f81f6856"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.710604 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863e1fad-048e-4104-aa38-ca05ffec260a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "863e1fad-048e-4104-aa38-ca05ffec260a" (UID: "863e1fad-048e-4104-aa38-ca05ffec260a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.712570 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bda434a-853e-4281-9e1b-1d79f81f6856-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8bda434a-853e-4281-9e1b-1d79f81f6856" (UID: "8bda434a-853e-4281-9e1b-1d79f81f6856"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.712767 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bda434a-853e-4281-9e1b-1d79f81f6856-kube-api-access-pxwx2" (OuterVolumeSpecName: "kube-api-access-pxwx2") pod "8bda434a-853e-4281-9e1b-1d79f81f6856" (UID: "8bda434a-853e-4281-9e1b-1d79f81f6856"). InnerVolumeSpecName "kube-api-access-pxwx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.713296 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863e1fad-048e-4104-aa38-ca05ffec260a-kube-api-access-2hn6p" (OuterVolumeSpecName: "kube-api-access-2hn6p") pod "863e1fad-048e-4104-aa38-ca05ffec260a" (UID: "863e1fad-048e-4104-aa38-ca05ffec260a"). InnerVolumeSpecName "kube-api-access-2hn6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.726340 4937 scope.go:117] "RemoveContainer" containerID="cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a"
Jan 23 06:38:25 crc kubenswrapper[4937]: E0123 06:38:25.727268 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a\": container with ID starting with cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a not found: ID does not exist" containerID="cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.727314 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a"} err="failed to get container status \"cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a\": rpc error: code = NotFound desc = could not find container \"cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a\": container with ID starting with cb234850be169a8a9756978094fab729b5502623075e87aa57e39f2ce653c98a not found: ID does not exist"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.727350 4937 scope.go:117] "RemoveContainer" containerID="bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.752464 4937 scope.go:117] "RemoveContainer" containerID="bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73"
Jan 23 06:38:25 crc kubenswrapper[4937]: E0123 06:38:25.753035 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73\": container with ID starting with bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73 not found: ID does not exist" containerID="bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.753071 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73"} err="failed to get container status \"bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73\": rpc error: code = NotFound desc = could not find container \"bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73\": container with ID starting with bf91efd215656bfe1972f9cad504ffd41f10506d0032378d6e13bdb48aba6f73 not found: ID does not exist"
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806104 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806152 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806166 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxwx2\" (UniqueName: \"kubernetes.io/projected/8bda434a-853e-4281-9e1b-1d79f81f6856-kube-api-access-pxwx2\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806179 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bda434a-853e-4281-9e1b-1d79f81f6856-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806190 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/863e1fad-048e-4104-aa38-ca05ffec260a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806200 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806215 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hn6p\" (UniqueName: \"kubernetes.io/projected/863e1fad-048e-4104-aa38-ca05ffec260a-kube-api-access-2hn6p\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806226 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e1fad-048e-4104-aa38-ca05ffec260a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:25 crc kubenswrapper[4937]: I0123 06:38:25.806238 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bda434a-853e-4281-9e1b-1d79f81f6856-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.041903 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"]
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.047513 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ds4g7"]
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.060205 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g9qqj"]
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.065122 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-g9qqj"]
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.537501 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863e1fad-048e-4104-aa38-ca05ffec260a" path="/var/lib/kubelet/pods/863e1fad-048e-4104-aa38-ca05ffec260a/volumes"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.538675 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bda434a-853e-4281-9e1b-1d79f81f6856" path="/var/lib/kubelet/pods/8bda434a-853e-4281-9e1b-1d79f81f6856/volumes"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.885946 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54d6d9f464-f66ss"]
Jan 23 06:38:26 crc kubenswrapper[4937]: E0123 06:38:26.886286 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bda434a-853e-4281-9e1b-1d79f81f6856" containerName="controller-manager"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.886302 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bda434a-853e-4281-9e1b-1d79f81f6856" containerName="controller-manager"
Jan 23 06:38:26 crc kubenswrapper[4937]: E0123 06:38:26.886320 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863e1fad-048e-4104-aa38-ca05ffec260a" containerName="route-controller-manager"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.886327 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="863e1fad-048e-4104-aa38-ca05ffec260a" containerName="route-controller-manager"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.886464 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="863e1fad-048e-4104-aa38-ca05ffec260a" containerName="route-controller-manager"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.886477 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bda434a-853e-4281-9e1b-1d79f81f6856" containerName="controller-manager"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.888148 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.890160 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.891153 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.891286 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.891358 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.891493 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.894018 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.900870 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"]
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.902937 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.906451 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.906753 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.907080 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.908988 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.909858 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.910192 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.910366 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.926723 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"]
Jan 23 06:38:26 crc kubenswrapper[4937]: I0123 06:38:26.935217 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54d6d9f464-f66ss"]
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021029 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-client-ca\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021108 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-serving-cert\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021160 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxvc7\" (UniqueName: \"kubernetes.io/projected/45d717cb-e1e2-4d86-b662-14eba800a001-kube-api-access-rxvc7\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021190 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhlx\" (UniqueName: \"kubernetes.io/projected/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-kube-api-access-6hhlx\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021217 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-proxy-ca-bundles\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021514 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-client-ca\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021682 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d717cb-e1e2-4d86-b662-14eba800a001-serving-cert\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021760 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-config\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.021825 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-config\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123054 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-client-ca\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123114 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d717cb-e1e2-4d86-b662-14eba800a001-serving-cert\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123162 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-config\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123188 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-config\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123216 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-client-ca\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123272 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-serving-cert\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123322 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxvc7\" (UniqueName: \"kubernetes.io/projected/45d717cb-e1e2-4d86-b662-14eba800a001-kube-api-access-rxvc7\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123351 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hhlx\" (UniqueName: \"kubernetes.io/projected/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-kube-api-access-6hhlx\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.123372 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-proxy-ca-bundles\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.124468 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-client-ca\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.124626 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-client-ca\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.124717 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-proxy-ca-bundles\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.124968 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-config\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.125099 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-config\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss"
Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.130324 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-serving-cert\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.131051 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d717cb-e1e2-4d86-b662-14eba800a001-serving-cert\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.153105 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxvc7\" (UniqueName: \"kubernetes.io/projected/45d717cb-e1e2-4d86-b662-14eba800a001-kube-api-access-rxvc7\") pod \"route-controller-manager-5b6596f69-sdfgx\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.153939 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hhlx\" (UniqueName: \"kubernetes.io/projected/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-kube-api-access-6hhlx\") pod \"controller-manager-54d6d9f464-f66ss\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.244920 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.253866 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.278435 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54d6d9f464-f66ss"] Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.320279 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"] Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.603269 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54d6d9f464-f66ss"] Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.742112 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" event={"ID":"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea","Type":"ContainerStarted","Data":"ff537dd9879aa8199702dd5e646b0bad869f1f6972150e492f2da3bc47ed989e"} Jan 23 06:38:27 crc kubenswrapper[4937]: I0123 06:38:27.763354 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"] Jan 23 06:38:27 crc kubenswrapper[4937]: W0123 06:38:27.769725 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45d717cb_e1e2_4d86_b662_14eba800a001.slice/crio-18c1f46bc93459bbeb9aeb2694bbbdb8f1f736ad70bb144f9a8ec160e7dfb38c WatchSource:0}: Error finding container 18c1f46bc93459bbeb9aeb2694bbbdb8f1f736ad70bb144f9a8ec160e7dfb38c: Status 404 returned error can't find the container with id 18c1f46bc93459bbeb9aeb2694bbbdb8f1f736ad70bb144f9a8ec160e7dfb38c Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.752151 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" 
event={"ID":"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea","Type":"ContainerStarted","Data":"e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656"} Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.752262 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" podUID="9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" containerName="controller-manager" containerID="cri-o://e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656" gracePeriod=30 Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.752693 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.755350 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" event={"ID":"45d717cb-e1e2-4d86-b662-14eba800a001","Type":"ContainerStarted","Data":"833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909"} Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.755377 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" event={"ID":"45d717cb-e1e2-4d86-b662-14eba800a001","Type":"ContainerStarted","Data":"18c1f46bc93459bbeb9aeb2694bbbdb8f1f736ad70bb144f9a8ec160e7dfb38c"} Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.755454 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" podUID="45d717cb-e1e2-4d86-b662-14eba800a001" containerName="route-controller-manager" containerID="cri-o://833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909" gracePeriod=30 Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.755803 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.763940 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.764874 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.788346 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" podStartSLOduration=3.78831735 podStartE2EDuration="3.78831735s" podCreationTimestamp="2026-01-23 06:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:38:28.783735914 +0000 UTC m=+308.587502577" watchObservedRunningTime="2026-01-23 06:38:28.78831735 +0000 UTC m=+308.592084023" Jan 23 06:38:28 crc kubenswrapper[4937]: I0123 06:38:28.817684 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" podStartSLOduration=3.817659022 podStartE2EDuration="3.817659022s" podCreationTimestamp="2026-01-23 06:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:38:28.801930487 +0000 UTC m=+308.605697180" watchObservedRunningTime="2026-01-23 06:38:28.817659022 +0000 UTC m=+308.621425695" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.141543 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.141920 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.171685 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58498d946f-zh2tv"] Jan 23 06:38:29 crc kubenswrapper[4937]: E0123 06:38:29.171937 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" containerName="controller-manager" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.171951 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" containerName="controller-manager" Jan 23 06:38:29 crc kubenswrapper[4937]: E0123 06:38:29.171962 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d717cb-e1e2-4d86-b662-14eba800a001" containerName="route-controller-manager" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.171968 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d717cb-e1e2-4d86-b662-14eba800a001" containerName="route-controller-manager" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.172056 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d717cb-e1e2-4d86-b662-14eba800a001" containerName="route-controller-manager" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.172069 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" containerName="controller-manager" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.172470 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.187939 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58498d946f-zh2tv"] Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266490 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-config\") pod \"45d717cb-e1e2-4d86-b662-14eba800a001\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266547 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-serving-cert\") pod \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266577 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-client-ca\") pod \"45d717cb-e1e2-4d86-b662-14eba800a001\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266612 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxvc7\" (UniqueName: \"kubernetes.io/projected/45d717cb-e1e2-4d86-b662-14eba800a001-kube-api-access-rxvc7\") pod \"45d717cb-e1e2-4d86-b662-14eba800a001\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266633 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-client-ca\") pod \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\" (UID: 
\"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266669 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-proxy-ca-bundles\") pod \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266685 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hhlx\" (UniqueName: \"kubernetes.io/projected/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-kube-api-access-6hhlx\") pod \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266703 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-config\") pod \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\" (UID: \"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266725 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d717cb-e1e2-4d86-b662-14eba800a001-serving-cert\") pod \"45d717cb-e1e2-4d86-b662-14eba800a001\" (UID: \"45d717cb-e1e2-4d86-b662-14eba800a001\") " Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266796 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-config\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266823 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-proxy-ca-bundles\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266856 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-serving-cert\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266878 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z6w2\" (UniqueName: \"kubernetes.io/projected/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-kube-api-access-8z6w2\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.266903 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-client-ca\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.268236 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" (UID: 
"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.268254 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" (UID: "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.268260 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-config" (OuterVolumeSpecName: "config") pod "45d717cb-e1e2-4d86-b662-14eba800a001" (UID: "45d717cb-e1e2-4d86-b662-14eba800a001"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.268492 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-config" (OuterVolumeSpecName: "config") pod "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" (UID: "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.269062 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-client-ca" (OuterVolumeSpecName: "client-ca") pod "45d717cb-e1e2-4d86-b662-14eba800a001" (UID: "45d717cb-e1e2-4d86-b662-14eba800a001"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.273236 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" (UID: "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.273257 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-kube-api-access-6hhlx" (OuterVolumeSpecName: "kube-api-access-6hhlx") pod "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" (UID: "9bb6bfa7-a3fd-409d-b3e9-fce93104ecea"). InnerVolumeSpecName "kube-api-access-6hhlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.274703 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d717cb-e1e2-4d86-b662-14eba800a001-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45d717cb-e1e2-4d86-b662-14eba800a001" (UID: "45d717cb-e1e2-4d86-b662-14eba800a001"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.275786 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d717cb-e1e2-4d86-b662-14eba800a001-kube-api-access-rxvc7" (OuterVolumeSpecName: "kube-api-access-rxvc7") pod "45d717cb-e1e2-4d86-b662-14eba800a001" (UID: "45d717cb-e1e2-4d86-b662-14eba800a001"). InnerVolumeSpecName "kube-api-access-rxvc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.368086 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-client-ca\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.368501 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-config\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.368617 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-proxy-ca-bundles\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.368732 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-serving-cert\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.368840 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z6w2\" (UniqueName: \"kubernetes.io/projected/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-kube-api-access-8z6w2\") pod 
\"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369338 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369415 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369481 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxvc7\" (UniqueName: \"kubernetes.io/projected/45d717cb-e1e2-4d86-b662-14eba800a001-kube-api-access-rxvc7\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369540 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369647 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369708 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hhlx\" (UniqueName: \"kubernetes.io/projected/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-kube-api-access-6hhlx\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369778 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea-config\") on 
node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369836 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45d717cb-e1e2-4d86-b662-14eba800a001-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.369904 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d717cb-e1e2-4d86-b662-14eba800a001-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.370815 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-client-ca\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.370863 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-config\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.373261 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-proxy-ca-bundles\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.378200 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-serving-cert\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.391292 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z6w2\" (UniqueName: \"kubernetes.io/projected/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-kube-api-access-8z6w2\") pod \"controller-manager-58498d946f-zh2tv\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.491928 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.730475 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58498d946f-zh2tv"] Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.762820 4937 generic.go:334] "Generic (PLEG): container finished" podID="45d717cb-e1e2-4d86-b662-14eba800a001" containerID="833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909" exitCode=0 Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.762940 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.763377 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" event={"ID":"45d717cb-e1e2-4d86-b662-14eba800a001","Type":"ContainerDied","Data":"833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909"} Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.763454 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx" event={"ID":"45d717cb-e1e2-4d86-b662-14eba800a001","Type":"ContainerDied","Data":"18c1f46bc93459bbeb9aeb2694bbbdb8f1f736ad70bb144f9a8ec160e7dfb38c"} Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.763475 4937 scope.go:117] "RemoveContainer" containerID="833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.765103 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" event={"ID":"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b","Type":"ContainerStarted","Data":"1f045e31d1faa1b7f0b1c8ebd2f9cc52c5f2c36cbc1e6350e20ce389416ae33c"} Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.769540 4937 generic.go:334] "Generic (PLEG): container finished" podID="9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" containerID="e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656" exitCode=0 Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.769570 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" event={"ID":"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea","Type":"ContainerDied","Data":"e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656"} Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.769654 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" event={"ID":"9bb6bfa7-a3fd-409d-b3e9-fce93104ecea","Type":"ContainerDied","Data":"ff537dd9879aa8199702dd5e646b0bad869f1f6972150e492f2da3bc47ed989e"} Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.769710 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54d6d9f464-f66ss" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.790128 4937 scope.go:117] "RemoveContainer" containerID="833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909" Jan 23 06:38:29 crc kubenswrapper[4937]: E0123 06:38:29.790982 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909\": container with ID starting with 833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909 not found: ID does not exist" containerID="833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.791034 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909"} err="failed to get container status \"833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909\": rpc error: code = NotFound desc = could not find container \"833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909\": container with ID starting with 833cdc6a747c46f4e5a38eaf480aa2f52793d76aa8fbabcdc2898de0bceea909 not found: ID does not exist" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.791060 4937 scope.go:117] "RemoveContainer" containerID="e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.818419 4937 scope.go:117] 
"RemoveContainer" containerID="e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.823827 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"] Jan 23 06:38:29 crc kubenswrapper[4937]: E0123 06:38:29.824527 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656\": container with ID starting with e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656 not found: ID does not exist" containerID="e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.824569 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656"} err="failed to get container status \"e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656\": rpc error: code = NotFound desc = could not find container \"e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656\": container with ID starting with e03f98e5eda3e475d9cbbd16e5d3aa60c15dae54b7364ef98985176baf814656 not found: ID does not exist" Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.830564 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6596f69-sdfgx"] Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.848670 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54d6d9f464-f66ss"] Jan 23 06:38:29 crc kubenswrapper[4937]: I0123 06:38:29.852042 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54d6d9f464-f66ss"] Jan 23 06:38:30 crc kubenswrapper[4937]: I0123 06:38:30.538244 
4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d717cb-e1e2-4d86-b662-14eba800a001" path="/var/lib/kubelet/pods/45d717cb-e1e2-4d86-b662-14eba800a001/volumes" Jan 23 06:38:30 crc kubenswrapper[4937]: I0123 06:38:30.539952 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb6bfa7-a3fd-409d-b3e9-fce93104ecea" path="/var/lib/kubelet/pods/9bb6bfa7-a3fd-409d-b3e9-fce93104ecea/volumes" Jan 23 06:38:30 crc kubenswrapper[4937]: I0123 06:38:30.784297 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" event={"ID":"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b","Type":"ContainerStarted","Data":"b57d16daf2958da4f9ad82be99879d812cb051e55f63d9a5ac90ef484ca916aa"} Jan 23 06:38:30 crc kubenswrapper[4937]: I0123 06:38:30.784579 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:30 crc kubenswrapper[4937]: I0123 06:38:30.796046 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:38:30 crc kubenswrapper[4937]: I0123 06:38:30.810772 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" podStartSLOduration=3.810732713 podStartE2EDuration="3.810732713s" podCreationTimestamp="2026-01-23 06:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:38:30.809248452 +0000 UTC m=+310.613015105" watchObservedRunningTime="2026-01-23 06:38:30.810732713 +0000 UTC m=+310.614499466" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.622800 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 06:38:31 crc kubenswrapper[4937]: 
I0123 06:38:31.888701 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj"] Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.890886 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.907372 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.907506 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.908302 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.908880 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.909223 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.909645 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:38:31 crc kubenswrapper[4937]: I0123 06:38:31.919824 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj"] Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.014560 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cvb\" (UniqueName: 
\"kubernetes.io/projected/6102d936-6a38-4997-84eb-b282b2bfa48d-kube-api-access-24cvb\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.015341 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-client-ca\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.015524 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6102d936-6a38-4997-84eb-b282b2bfa48d-serving-cert\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.015771 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-config\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.116422 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-client-ca\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " 
pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.116462 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6102d936-6a38-4997-84eb-b282b2bfa48d-serving-cert\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.116498 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-config\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.116534 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24cvb\" (UniqueName: \"kubernetes.io/projected/6102d936-6a38-4997-84eb-b282b2bfa48d-kube-api-access-24cvb\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.117802 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-config\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.120166 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-client-ca\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.127898 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6102d936-6a38-4997-84eb-b282b2bfa48d-serving-cert\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.133691 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24cvb\" (UniqueName: \"kubernetes.io/projected/6102d936-6a38-4997-84eb-b282b2bfa48d-kube-api-access-24cvb\") pod \"route-controller-manager-69d8b4dd4d-695dj\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.226464 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.731161 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj"] Jan 23 06:38:32 crc kubenswrapper[4937]: I0123 06:38:32.798092 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" event={"ID":"6102d936-6a38-4997-84eb-b282b2bfa48d","Type":"ContainerStarted","Data":"243aaf615e662bab6dc905d0ba50bbfa30bc7f26923006b9a9da165b405fdf98"} Jan 23 06:38:33 crc kubenswrapper[4937]: I0123 06:38:33.806551 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" event={"ID":"6102d936-6a38-4997-84eb-b282b2bfa48d","Type":"ContainerStarted","Data":"8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43"} Jan 23 06:38:33 crc kubenswrapper[4937]: I0123 06:38:33.807161 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:33 crc kubenswrapper[4937]: I0123 06:38:33.816101 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:38:33 crc kubenswrapper[4937]: I0123 06:38:33.833961 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" podStartSLOduration=6.833924218 podStartE2EDuration="6.833924218s" podCreationTimestamp="2026-01-23 06:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:38:33.833891927 +0000 UTC m=+313.637658650" 
watchObservedRunningTime="2026-01-23 06:38:33.833924218 +0000 UTC m=+313.637690911" Jan 23 06:38:40 crc kubenswrapper[4937]: I0123 06:38:40.410509 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.188786 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj"] Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.190301 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" podUID="6102d936-6a38-4997-84eb-b282b2bfa48d" containerName="route-controller-manager" containerID="cri-o://8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43" gracePeriod=30 Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.635147 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.734640 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24cvb\" (UniqueName: \"kubernetes.io/projected/6102d936-6a38-4997-84eb-b282b2bfa48d-kube-api-access-24cvb\") pod \"6102d936-6a38-4997-84eb-b282b2bfa48d\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.734712 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6102d936-6a38-4997-84eb-b282b2bfa48d-serving-cert\") pod \"6102d936-6a38-4997-84eb-b282b2bfa48d\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.734767 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-config\") pod \"6102d936-6a38-4997-84eb-b282b2bfa48d\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.734809 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-client-ca\") pod \"6102d936-6a38-4997-84eb-b282b2bfa48d\" (UID: \"6102d936-6a38-4997-84eb-b282b2bfa48d\") " Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.735967 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-client-ca" (OuterVolumeSpecName: "client-ca") pod "6102d936-6a38-4997-84eb-b282b2bfa48d" (UID: "6102d936-6a38-4997-84eb-b282b2bfa48d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.736037 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-config" (OuterVolumeSpecName: "config") pod "6102d936-6a38-4997-84eb-b282b2bfa48d" (UID: "6102d936-6a38-4997-84eb-b282b2bfa48d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.744885 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6102d936-6a38-4997-84eb-b282b2bfa48d-kube-api-access-24cvb" (OuterVolumeSpecName: "kube-api-access-24cvb") pod "6102d936-6a38-4997-84eb-b282b2bfa48d" (UID: "6102d936-6a38-4997-84eb-b282b2bfa48d"). InnerVolumeSpecName "kube-api-access-24cvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.744900 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6102d936-6a38-4997-84eb-b282b2bfa48d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6102d936-6a38-4997-84eb-b282b2bfa48d" (UID: "6102d936-6a38-4997-84eb-b282b2bfa48d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.836469 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24cvb\" (UniqueName: \"kubernetes.io/projected/6102d936-6a38-4997-84eb-b282b2bfa48d-kube-api-access-24cvb\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.836514 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6102d936-6a38-4997-84eb-b282b2bfa48d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.836530 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:05 crc kubenswrapper[4937]: I0123 06:39:05.836539 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6102d936-6a38-4997-84eb-b282b2bfa48d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.027223 4937 generic.go:334] "Generic (PLEG): container finished" podID="6102d936-6a38-4997-84eb-b282b2bfa48d" containerID="8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43" exitCode=0 Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.027346 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" 
event={"ID":"6102d936-6a38-4997-84eb-b282b2bfa48d","Type":"ContainerDied","Data":"8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43"} Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.027404 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" event={"ID":"6102d936-6a38-4997-84eb-b282b2bfa48d","Type":"ContainerDied","Data":"243aaf615e662bab6dc905d0ba50bbfa30bc7f26923006b9a9da165b405fdf98"} Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.027317 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.027435 4937 scope.go:117] "RemoveContainer" containerID="8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.053508 4937 scope.go:117] "RemoveContainer" containerID="8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43" Jan 23 06:39:06 crc kubenswrapper[4937]: E0123 06:39:06.054115 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43\": container with ID starting with 8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43 not found: ID does not exist" containerID="8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.054168 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43"} err="failed to get container status \"8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43\": rpc error: code = NotFound desc = could not find container 
\"8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43\": container with ID starting with 8e299c01dbce65d9e4f0d7e12fa838417113419e57f08c24bb7cb0f0fbefca43 not found: ID does not exist" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.074134 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj"] Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.077967 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d8b4dd4d-695dj"] Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.544551 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6102d936-6a38-4997-84eb-b282b2bfa48d" path="/var/lib/kubelet/pods/6102d936-6a38-4997-84eb-b282b2bfa48d/volumes" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.918382 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q"] Jan 23 06:39:06 crc kubenswrapper[4937]: E0123 06:39:06.919470 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6102d936-6a38-4997-84eb-b282b2bfa48d" containerName="route-controller-manager" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.919657 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6102d936-6a38-4997-84eb-b282b2bfa48d" containerName="route-controller-manager" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.919942 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="6102d936-6a38-4997-84eb-b282b2bfa48d" containerName="route-controller-manager" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.920738 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.923877 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.924039 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.924061 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.924111 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.924299 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.925830 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 06:39:06 crc kubenswrapper[4937]: I0123 06:39:06.935978 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q"] Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.056557 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ac24e2-ea0b-4886-8211-2438b2b55c68-config\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.056648 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ac24e2-ea0b-4886-8211-2438b2b55c68-serving-cert\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.057814 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ac24e2-ea0b-4886-8211-2438b2b55c68-client-ca\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.057876 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp6mq\" (UniqueName: \"kubernetes.io/projected/63ac24e2-ea0b-4886-8211-2438b2b55c68-kube-api-access-dp6mq\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.159125 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ac24e2-ea0b-4886-8211-2438b2b55c68-config\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.159642 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ac24e2-ea0b-4886-8211-2438b2b55c68-serving-cert\") pod \"route-controller-manager-654845bdf-wt89q\" 
(UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.159906 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ac24e2-ea0b-4886-8211-2438b2b55c68-client-ca\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.160139 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp6mq\" (UniqueName: \"kubernetes.io/projected/63ac24e2-ea0b-4886-8211-2438b2b55c68-kube-api-access-dp6mq\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.161738 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ac24e2-ea0b-4886-8211-2438b2b55c68-config\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.162289 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ac24e2-ea0b-4886-8211-2438b2b55c68-client-ca\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.172302 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/63ac24e2-ea0b-4886-8211-2438b2b55c68-serving-cert\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.195903 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp6mq\" (UniqueName: \"kubernetes.io/projected/63ac24e2-ea0b-4886-8211-2438b2b55c68-kube-api-access-dp6mq\") pod \"route-controller-manager-654845bdf-wt89q\" (UID: \"63ac24e2-ea0b-4886-8211-2438b2b55c68\") " pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.247041 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:07 crc kubenswrapper[4937]: I0123 06:39:07.786239 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q"] Jan 23 06:39:08 crc kubenswrapper[4937]: I0123 06:39:08.045217 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" event={"ID":"63ac24e2-ea0b-4886-8211-2438b2b55c68","Type":"ContainerStarted","Data":"f7e7b5bfe6e28905a3f04b2a716cb94a4ecdd40f86cc276af07d2ac9b354d523"} Jan 23 06:39:08 crc kubenswrapper[4937]: I0123 06:39:08.045625 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" event={"ID":"63ac24e2-ea0b-4886-8211-2438b2b55c68","Type":"ContainerStarted","Data":"cdda75ca45cdff913c67758013cda81a5db157beb7bd4ecacb36b66ca5b6636c"} Jan 23 06:39:08 crc kubenswrapper[4937]: I0123 06:39:08.045992 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:08 crc kubenswrapper[4937]: I0123 06:39:08.304900 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" Jan 23 06:39:08 crc kubenswrapper[4937]: I0123 06:39:08.333900 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-654845bdf-wt89q" podStartSLOduration=3.333879722 podStartE2EDuration="3.333879722s" podCreationTimestamp="2026-01-23 06:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:39:08.077897599 +0000 UTC m=+347.881664252" watchObservedRunningTime="2026-01-23 06:39:08.333879722 +0000 UTC m=+348.137646395" Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.793571 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-742j8"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.795004 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-742j8" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="registry-server" containerID="cri-o://7b683fcd6ea0ef62d4427e07e323fdfabe3c21b592fb6dd8ea31543e4afeac44" gracePeriod=30 Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.817203 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qk5qp"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.817664 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qk5qp" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="registry-server" containerID="cri-o://8f97a13b89840ea0d9fdf1f6abb45a940039aeb401ccee052994c0b514bbc858" gracePeriod=30 Jan 
23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.828466 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xmf5"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.828757 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator" containerID="cri-o://6ddc27e9c0ae32be0e98701b084848dac66a8a2bad106a76e4f2c97fd6bce4a5" gracePeriod=30 Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.849736 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs5k4"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.849843 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8mqth"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.851930 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hs5k4" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="registry-server" containerID="cri-o://f1c3ba9425ef2296b960ac2637cf0b6ca561bf0a2d916ad9d324c21f427ee785" gracePeriod=30 Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.857416 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9rt4"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.857546 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.861442 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8mqth"] Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.861872 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9rt4" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="registry-server" containerID="cri-o://183bdc0370b31ecb970cbc49ba456485a41bfbbb0d32c9cb58b94a2f27fc118a" gracePeriod=30 Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.956953 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/936ee148-8015-4156-9a4b-c394c173f197-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.957076 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/936ee148-8015-4156-9a4b-c394c173f197-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:13 crc kubenswrapper[4937]: I0123 06:39:13.957117 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnvp\" (UniqueName: \"kubernetes.io/projected/936ee148-8015-4156-9a4b-c394c173f197-kube-api-access-pnnvp\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.058312 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/936ee148-8015-4156-9a4b-c394c173f197-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.058731 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/936ee148-8015-4156-9a4b-c394c173f197-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.058768 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnvp\" (UniqueName: \"kubernetes.io/projected/936ee148-8015-4156-9a4b-c394c173f197-kube-api-access-pnnvp\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.059965 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/936ee148-8015-4156-9a4b-c394c173f197-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.070134 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/936ee148-8015-4156-9a4b-c394c173f197-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.078284 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnvp\" (UniqueName: \"kubernetes.io/projected/936ee148-8015-4156-9a4b-c394c173f197-kube-api-access-pnnvp\") pod \"marketplace-operator-79b997595-8mqth\" (UID: \"936ee148-8015-4156-9a4b-c394c173f197\") " pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.089668 4937 generic.go:334] "Generic (PLEG): container finished" podID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerID="f1c3ba9425ef2296b960ac2637cf0b6ca561bf0a2d916ad9d324c21f427ee785" exitCode=0 Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.089735 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs5k4" event={"ID":"ee63b0c9-aed6-4be4-9987-c27fe197911d","Type":"ContainerDied","Data":"f1c3ba9425ef2296b960ac2637cf0b6ca561bf0a2d916ad9d324c21f427ee785"} Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.091048 4937 generic.go:334] "Generic (PLEG): container finished" podID="a5003ecc-816a-494a-b773-0edb552a55f3" containerID="7b683fcd6ea0ef62d4427e07e323fdfabe3c21b592fb6dd8ea31543e4afeac44" exitCode=0 Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.091087 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerDied","Data":"7b683fcd6ea0ef62d4427e07e323fdfabe3c21b592fb6dd8ea31543e4afeac44"} Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.092453 4937 generic.go:334] "Generic (PLEG): container finished" podID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" 
containerID="183bdc0370b31ecb970cbc49ba456485a41bfbbb0d32c9cb58b94a2f27fc118a" exitCode=0 Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.092521 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerDied","Data":"183bdc0370b31ecb970cbc49ba456485a41bfbbb0d32c9cb58b94a2f27fc118a"} Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.104963 4937 generic.go:334] "Generic (PLEG): container finished" podID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerID="8f97a13b89840ea0d9fdf1f6abb45a940039aeb401ccee052994c0b514bbc858" exitCode=0 Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.105067 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5qp" event={"ID":"89c5c12d-911e-4ba4-b4ac-2121b26efdcb","Type":"ContainerDied","Data":"8f97a13b89840ea0d9fdf1f6abb45a940039aeb401ccee052994c0b514bbc858"} Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.132630 4937 generic.go:334] "Generic (PLEG): container finished" podID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerID="6ddc27e9c0ae32be0e98701b084848dac66a8a2bad106a76e4f2c97fd6bce4a5" exitCode=0 Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.132681 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" event={"ID":"dda6ba11-61ab-4501-afa3-5bb654f352ea","Type":"ContainerDied","Data":"6ddc27e9c0ae32be0e98701b084848dac66a8a2bad106a76e4f2c97fd6bce4a5"} Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.132743 4937 scope.go:117] "RemoveContainer" containerID="8472eb38158528f591e151fb55471c861b866310e1c09e0a8ead7e695c108f56" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.234093 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.320800 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-742j8" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.463661 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-utilities\") pod \"a5003ecc-816a-494a-b773-0edb552a55f3\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.464043 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-catalog-content\") pod \"a5003ecc-816a-494a-b773-0edb552a55f3\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.464090 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhj2r\" (UniqueName: \"kubernetes.io/projected/a5003ecc-816a-494a-b773-0edb552a55f3-kube-api-access-jhj2r\") pod \"a5003ecc-816a-494a-b773-0edb552a55f3\" (UID: \"a5003ecc-816a-494a-b773-0edb552a55f3\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.464852 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-utilities" (OuterVolumeSpecName: "utilities") pod "a5003ecc-816a-494a-b773-0edb552a55f3" (UID: "a5003ecc-816a-494a-b773-0edb552a55f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.465902 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rt4" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.470189 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5003ecc-816a-494a-b773-0edb552a55f3-kube-api-access-jhj2r" (OuterVolumeSpecName: "kube-api-access-jhj2r") pod "a5003ecc-816a-494a-b773-0edb552a55f3" (UID: "a5003ecc-816a-494a-b773-0edb552a55f3"). InnerVolumeSpecName "kube-api-access-jhj2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.472654 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5qp" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.494230 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.516236 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5003ecc-816a-494a-b773-0edb552a55f3" (UID: "a5003ecc-816a-494a-b773-0edb552a55f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.525801 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs5k4" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.565691 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgfdd\" (UniqueName: \"kubernetes.io/projected/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-kube-api-access-zgfdd\") pod \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.565745 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-utilities\") pod \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.565817 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-catalog-content\") pod \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\" (UID: \"f8f13571-ed06-4bcc-825f-8bc5915ab5a7\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.566091 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhj2r\" (UniqueName: \"kubernetes.io/projected/a5003ecc-816a-494a-b773-0edb552a55f3-kube-api-access-jhj2r\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.566105 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.566114 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5003ecc-816a-494a-b773-0edb552a55f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: 
I0123 06:39:14.570559 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-utilities" (OuterVolumeSpecName: "utilities") pod "f8f13571-ed06-4bcc-825f-8bc5915ab5a7" (UID: "f8f13571-ed06-4bcc-825f-8bc5915ab5a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.571173 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-kube-api-access-zgfdd" (OuterVolumeSpecName: "kube-api-access-zgfdd") pod "f8f13571-ed06-4bcc-825f-8bc5915ab5a7" (UID: "f8f13571-ed06-4bcc-825f-8bc5915ab5a7"). InnerVolumeSpecName "kube-api-access-zgfdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667515 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-catalog-content\") pod \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667572 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-utilities\") pod \"ee63b0c9-aed6-4be4-9987-c27fe197911d\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667653 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-utilities\") pod \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667682 4937 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-catalog-content\") pod \"ee63b0c9-aed6-4be4-9987-c27fe197911d\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667699 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgc95\" (UniqueName: \"kubernetes.io/projected/dda6ba11-61ab-4501-afa3-5bb654f352ea-kube-api-access-qgc95\") pod \"dda6ba11-61ab-4501-afa3-5bb654f352ea\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667731 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-trusted-ca\") pod \"dda6ba11-61ab-4501-afa3-5bb654f352ea\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667747 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhjxn\" (UniqueName: \"kubernetes.io/projected/ee63b0c9-aed6-4be4-9987-c27fe197911d-kube-api-access-bhjxn\") pod \"ee63b0c9-aed6-4be4-9987-c27fe197911d\" (UID: \"ee63b0c9-aed6-4be4-9987-c27fe197911d\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667764 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4wkj\" (UniqueName: \"kubernetes.io/projected/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-kube-api-access-d4wkj\") pod \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\" (UID: \"89c5c12d-911e-4ba4-b4ac-2121b26efdcb\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667799 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-operator-metrics\") pod \"dda6ba11-61ab-4501-afa3-5bb654f352ea\" (UID: \"dda6ba11-61ab-4501-afa3-5bb654f352ea\") " Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.667976 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgfdd\" (UniqueName: \"kubernetes.io/projected/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-kube-api-access-zgfdd\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.668910 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dda6ba11-61ab-4501-afa3-5bb654f352ea" (UID: "dda6ba11-61ab-4501-afa3-5bb654f352ea"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.669014 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-utilities" (OuterVolumeSpecName: "utilities") pod "ee63b0c9-aed6-4be4-9987-c27fe197911d" (UID: "ee63b0c9-aed6-4be4-9987-c27fe197911d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.669142 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-utilities" (OuterVolumeSpecName: "utilities") pod "89c5c12d-911e-4ba4-b4ac-2121b26efdcb" (UID: "89c5c12d-911e-4ba4-b4ac-2121b26efdcb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.669206 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.671793 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dda6ba11-61ab-4501-afa3-5bb654f352ea" (UID: "dda6ba11-61ab-4501-afa3-5bb654f352ea"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.672887 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda6ba11-61ab-4501-afa3-5bb654f352ea-kube-api-access-qgc95" (OuterVolumeSpecName: "kube-api-access-qgc95") pod "dda6ba11-61ab-4501-afa3-5bb654f352ea" (UID: "dda6ba11-61ab-4501-afa3-5bb654f352ea"). InnerVolumeSpecName "kube-api-access-qgc95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.673143 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-kube-api-access-d4wkj" (OuterVolumeSpecName: "kube-api-access-d4wkj") pod "89c5c12d-911e-4ba4-b4ac-2121b26efdcb" (UID: "89c5c12d-911e-4ba4-b4ac-2121b26efdcb"). InnerVolumeSpecName "kube-api-access-d4wkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.673728 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee63b0c9-aed6-4be4-9987-c27fe197911d-kube-api-access-bhjxn" (OuterVolumeSpecName: "kube-api-access-bhjxn") pod "ee63b0c9-aed6-4be4-9987-c27fe197911d" (UID: "ee63b0c9-aed6-4be4-9987-c27fe197911d"). InnerVolumeSpecName "kube-api-access-bhjxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.690369 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee63b0c9-aed6-4be4-9987-c27fe197911d" (UID: "ee63b0c9-aed6-4be4-9987-c27fe197911d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.703011 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8f13571-ed06-4bcc-825f-8bc5915ab5a7" (UID: "f8f13571-ed06-4bcc-825f-8bc5915ab5a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.739971 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89c5c12d-911e-4ba4-b4ac-2121b26efdcb" (UID: "89c5c12d-911e-4ba4-b4ac-2121b26efdcb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770093 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8f13571-ed06-4bcc-825f-8bc5915ab5a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770131 4937 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770145 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770155 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770164 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770172 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee63b0c9-aed6-4be4-9987-c27fe197911d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770181 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgc95\" (UniqueName: \"kubernetes.io/projected/dda6ba11-61ab-4501-afa3-5bb654f352ea-kube-api-access-qgc95\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: 
I0123 06:39:14.770190 4937 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dda6ba11-61ab-4501-afa3-5bb654f352ea-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770198 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhjxn\" (UniqueName: \"kubernetes.io/projected/ee63b0c9-aed6-4be4-9987-c27fe197911d-kube-api-access-bhjxn\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.770208 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4wkj\" (UniqueName: \"kubernetes.io/projected/89c5c12d-911e-4ba4-b4ac-2121b26efdcb-kube-api-access-d4wkj\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:14 crc kubenswrapper[4937]: I0123 06:39:14.830220 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8mqth"] Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.140296 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" event={"ID":"936ee148-8015-4156-9a4b-c394c173f197","Type":"ContainerStarted","Data":"505ae733057b84928096ebb92866f5580a2c7fb3e8bec065adbb974f8ae86b52"} Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.140790 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" event={"ID":"936ee148-8015-4156-9a4b-c394c173f197","Type":"ContainerStarted","Data":"9d332dab69da41d4f4c048060ee125358fe08f3d8b4a356ec7f93a2bf46832cd"} Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.140813 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.142147 4937 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8mqth 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body=
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.142226 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" podUID="936ee148-8015-4156-9a4b-c394c173f197" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.142770 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hs5k4"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.143801 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hs5k4" event={"ID":"ee63b0c9-aed6-4be4-9987-c27fe197911d","Type":"ContainerDied","Data":"6d990283e0ab5fc53f62754d3c38307a2b0fee00abdd4175045d62beafb6f58d"}
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.143842 4937 scope.go:117] "RemoveContainer" containerID="f1c3ba9425ef2296b960ac2637cf0b6ca561bf0a2d916ad9d324c21f427ee785"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.147947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-742j8" event={"ID":"a5003ecc-816a-494a-b773-0edb552a55f3","Type":"ContainerDied","Data":"27b7f807573b67303206c4899f492ab78ad73413720369ca3200fe4ac1d41760"}
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.148363 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-742j8"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.160950 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rt4" event={"ID":"f8f13571-ed06-4bcc-825f-8bc5915ab5a7","Type":"ContainerDied","Data":"65a004f28cabe16540c2fb034fcf86d6e153c3924f4d85040b69721f93a76c9c"}
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.161132 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rt4"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.164988 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth" podStartSLOduration=2.164966985 podStartE2EDuration="2.164966985s" podCreationTimestamp="2026-01-23 06:39:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:39:15.159738632 +0000 UTC m=+354.963505325" watchObservedRunningTime="2026-01-23 06:39:15.164966985 +0000 UTC m=+354.968733648"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.171025 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qk5qp" event={"ID":"89c5c12d-911e-4ba4-b4ac-2121b26efdcb","Type":"ContainerDied","Data":"a51ed2988d63eda175b46a8f81a4ff9046b8a26e50f3f85a3e449c76509dd6ab"}
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.171155 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qk5qp"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.177066 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5" event={"ID":"dda6ba11-61ab-4501-afa3-5bb654f352ea","Type":"ContainerDied","Data":"865e5620e83c0c3d592b1381ef914f01be3ed643d6b02d61c548ba8fbabe6593"}
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.177336 4937 scope.go:117] "RemoveContainer" containerID="5565237843dcf1f84efb90ccd7d87e9dff37650c42d10e025080a1c45bd0dc8d"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.177367 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xmf5"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.181461 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs5k4"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.203736 4937 scope.go:117] "RemoveContainer" containerID="59af0abcc6f069d4bb88b5c7318223165323934f204f5aadbd38e4004b675f4f"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.206090 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hs5k4"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.224119 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-742j8"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.226836 4937 scope.go:117] "RemoveContainer" containerID="7b683fcd6ea0ef62d4427e07e323fdfabe3c21b592fb6dd8ea31543e4afeac44"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.247955 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-742j8"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.250335 4937 scope.go:117] "RemoveContainer" containerID="2f1363c18cddb58037598afc012f89ca3e9df6fafd7c4f1bed5455ffbcf7497d"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.254341 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9rt4"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.259229 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9rt4"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.264711 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qk5qp"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.265369 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qk5qp"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.268819 4937 scope.go:117] "RemoveContainer" containerID="eb1cfd461d8a338437731c591913702089603ff2558903a595982a8fef94bd6b"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.268966 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xmf5"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.270165 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xmf5"]
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.287428 4937 scope.go:117] "RemoveContainer" containerID="183bdc0370b31ecb970cbc49ba456485a41bfbbb0d32c9cb58b94a2f27fc118a"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.309671 4937 scope.go:117] "RemoveContainer" containerID="80225fb15d29d93badbcd3ae88f2a1ffc2ca11d2eef59defe5301c713fec8674"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.327226 4937 scope.go:117] "RemoveContainer" containerID="93bad35010a127035bd8ca55f4e195fc212ffbcf499e44b2cf6fac2640178508"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.341599 4937 scope.go:117] "RemoveContainer" containerID="8f97a13b89840ea0d9fdf1f6abb45a940039aeb401ccee052994c0b514bbc858"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.356025 4937 scope.go:117] "RemoveContainer" containerID="bbb98caedd35369a4d01e929e5bd7647e08ac3de6771de5813a505162ac0cf2a"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.371483 4937 scope.go:117] "RemoveContainer" containerID="e0cebbed00ea87158a75cbf3bd36e8779a983dd303b4bf837cbf950c86202db4"
Jan 23 06:39:15 crc kubenswrapper[4937]: I0123 06:39:15.391174 4937 scope.go:117] "RemoveContainer" containerID="6ddc27e9c0ae32be0e98701b084848dac66a8a2bad106a76e4f2c97fd6bce4a5"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.194828 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8mqth"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.392421 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lw92f"]
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393179 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393209 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393227 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393241 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393260 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393274 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393289 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393302 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393319 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393331 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393345 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393359 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393377 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393390 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393405 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393417 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393432 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393444 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="extract-content"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393467 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393478 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393498 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393511 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="extract-utilities"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393525 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393537 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393552 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393564 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator"
Jan 23 06:39:16 crc kubenswrapper[4937]: E0123 06:39:16.393582 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393625 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393788 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393814 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393843 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393865 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393885 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" containerName="marketplace-operator"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.393901 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" containerName="registry-server"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.395193 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.397673 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.403949 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7kq\" (UniqueName: \"kubernetes.io/projected/3700c64c-0a23-4247-aba5-3a4f6da806d3-kube-api-access-9w7kq\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.404014 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3700c64c-0a23-4247-aba5-3a4f6da806d3-catalog-content\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.404146 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3700c64c-0a23-4247-aba5-3a4f6da806d3-utilities\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.414532 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lw92f"]
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.505681 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3700c64c-0a23-4247-aba5-3a4f6da806d3-catalog-content\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.505837 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3700c64c-0a23-4247-aba5-3a4f6da806d3-utilities\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.505874 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7kq\" (UniqueName: \"kubernetes.io/projected/3700c64c-0a23-4247-aba5-3a4f6da806d3-kube-api-access-9w7kq\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.506879 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3700c64c-0a23-4247-aba5-3a4f6da806d3-catalog-content\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.507485 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3700c64c-0a23-4247-aba5-3a4f6da806d3-utilities\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.535509 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7kq\" (UniqueName: \"kubernetes.io/projected/3700c64c-0a23-4247-aba5-3a4f6da806d3-kube-api-access-9w7kq\") pod \"redhat-operators-lw92f\" (UID: \"3700c64c-0a23-4247-aba5-3a4f6da806d3\") " pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.540788 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c5c12d-911e-4ba4-b4ac-2121b26efdcb" path="/var/lib/kubelet/pods/89c5c12d-911e-4ba4-b4ac-2121b26efdcb/volumes"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.542181 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5003ecc-816a-494a-b773-0edb552a55f3" path="/var/lib/kubelet/pods/a5003ecc-816a-494a-b773-0edb552a55f3/volumes"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.543451 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda6ba11-61ab-4501-afa3-5bb654f352ea" path="/var/lib/kubelet/pods/dda6ba11-61ab-4501-afa3-5bb654f352ea/volumes"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.545205 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee63b0c9-aed6-4be4-9987-c27fe197911d" path="/var/lib/kubelet/pods/ee63b0c9-aed6-4be4-9987-c27fe197911d/volumes"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.546349 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f13571-ed06-4bcc-825f-8bc5915ab5a7" path="/var/lib/kubelet/pods/f8f13571-ed06-4bcc-825f-8bc5915ab5a7/volumes"
Jan 23 06:39:16 crc kubenswrapper[4937]: I0123 06:39:16.723431 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lw92f"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.015007 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lw92f"]
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.197245 4937 generic.go:334] "Generic (PLEG): container finished" podID="3700c64c-0a23-4247-aba5-3a4f6da806d3" containerID="6c8d66f860e2861bb458978dd23bbd2e783c65098b5bec19d4ba11bdde418a02" exitCode=0
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.197300 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw92f" event={"ID":"3700c64c-0a23-4247-aba5-3a4f6da806d3","Type":"ContainerDied","Data":"6c8d66f860e2861bb458978dd23bbd2e783c65098b5bec19d4ba11bdde418a02"}
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.197360 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw92f" event={"ID":"3700c64c-0a23-4247-aba5-3a4f6da806d3","Type":"ContainerStarted","Data":"9f85841a12c36ca4ea3158296301fb905c2e1a5736d4e13f67177c6c78470689"}
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.390664 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vt9wx"]
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.393886 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.398423 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.401419 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vt9wx"]
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.424225 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed183f54-661c-4e72-9a2a-cd277c6119d1-utilities\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.424259 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed183f54-661c-4e72-9a2a-cd277c6119d1-catalog-content\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.424293 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chlfb\" (UniqueName: \"kubernetes.io/projected/ed183f54-661c-4e72-9a2a-cd277c6119d1-kube-api-access-chlfb\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.525558 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed183f54-661c-4e72-9a2a-cd277c6119d1-utilities\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.525845 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed183f54-661c-4e72-9a2a-cd277c6119d1-catalog-content\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.525874 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chlfb\" (UniqueName: \"kubernetes.io/projected/ed183f54-661c-4e72-9a2a-cd277c6119d1-kube-api-access-chlfb\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.526030 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed183f54-661c-4e72-9a2a-cd277c6119d1-utilities\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.526257 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed183f54-661c-4e72-9a2a-cd277c6119d1-catalog-content\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.548537 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chlfb\" (UniqueName: \"kubernetes.io/projected/ed183f54-661c-4e72-9a2a-cd277c6119d1-kube-api-access-chlfb\") pod \"community-operators-vt9wx\" (UID: \"ed183f54-661c-4e72-9a2a-cd277c6119d1\") " pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.730130 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vt9wx"
Jan 23 06:39:17 crc kubenswrapper[4937]: I0123 06:39:17.934250 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vt9wx"]
Jan 23 06:39:17 crc kubenswrapper[4937]: W0123 06:39:17.941826 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded183f54_661c_4e72_9a2a_cd277c6119d1.slice/crio-b4c28702f79c5476abad4d4a192badafe12d6f8a239d748975bedfb8185e05da WatchSource:0}: Error finding container b4c28702f79c5476abad4d4a192badafe12d6f8a239d748975bedfb8185e05da: Status 404 returned error can't find the container with id b4c28702f79c5476abad4d4a192badafe12d6f8a239d748975bedfb8185e05da
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.405079 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vt9wx" event={"ID":"ed183f54-661c-4e72-9a2a-cd277c6119d1","Type":"ContainerStarted","Data":"b4c28702f79c5476abad4d4a192badafe12d6f8a239d748975bedfb8185e05da"}
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.788463 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6m52s"]
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.790337 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.792240 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.798722 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6m52s"]
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.941934 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5379755c-affb-443d-ab53-50eaaf5b5324-utilities\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.942099 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqwds\" (UniqueName: \"kubernetes.io/projected/5379755c-affb-443d-ab53-50eaaf5b5324-kube-api-access-kqwds\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:18 crc kubenswrapper[4937]: I0123 06:39:18.942221 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5379755c-affb-443d-ab53-50eaaf5b5324-catalog-content\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.043814 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqwds\" (UniqueName: \"kubernetes.io/projected/5379755c-affb-443d-ab53-50eaaf5b5324-kube-api-access-kqwds\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.043867 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5379755c-affb-443d-ab53-50eaaf5b5324-catalog-content\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.043920 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5379755c-affb-443d-ab53-50eaaf5b5324-utilities\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.044309 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5379755c-affb-443d-ab53-50eaaf5b5324-utilities\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.044397 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5379755c-affb-443d-ab53-50eaaf5b5324-catalog-content\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.065504 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqwds\" (UniqueName: \"kubernetes.io/projected/5379755c-affb-443d-ab53-50eaaf5b5324-kube-api-access-kqwds\") pod \"certified-operators-6m52s\" (UID: \"5379755c-affb-443d-ab53-50eaaf5b5324\") " pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.108149 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6m52s"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.414438 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw92f" event={"ID":"3700c64c-0a23-4247-aba5-3a4f6da806d3","Type":"ContainerDied","Data":"d83a087580430854f78f148b06418d2270d289c1517736d29dbaff2a0b6465b9"}
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.415194 4937 generic.go:334] "Generic (PLEG): container finished" podID="3700c64c-0a23-4247-aba5-3a4f6da806d3" containerID="d83a087580430854f78f148b06418d2270d289c1517736d29dbaff2a0b6465b9" exitCode=0
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.418777 4937 generic.go:334] "Generic (PLEG): container finished" podID="ed183f54-661c-4e72-9a2a-cd277c6119d1" containerID="9aad938ec09a0c35de1747a452841cd82413d95a91099b8ac4dd713ab5ce2174" exitCode=0
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.418812 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vt9wx" event={"ID":"ed183f54-661c-4e72-9a2a-cd277c6119d1","Type":"ContainerDied","Data":"9aad938ec09a0c35de1747a452841cd82413d95a91099b8ac4dd713ab5ce2174"}
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.526249 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6m52s"]
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.796145 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ckzjh"]
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.797254 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.799429 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.801459 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckzjh"]
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.960148 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170a9df3-c6b9-4ec1-abc1-098640c265c8-utilities\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.960649 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkkd4\" (UniqueName: \"kubernetes.io/projected/170a9df3-c6b9-4ec1-abc1-098640c265c8-kube-api-access-pkkd4\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:19 crc kubenswrapper[4937]: I0123 06:39:19.960713 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170a9df3-c6b9-4ec1-abc1-098640c265c8-catalog-content\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.062072 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170a9df3-c6b9-4ec1-abc1-098640c265c8-utilities\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.062152 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkkd4\" (UniqueName: \"kubernetes.io/projected/170a9df3-c6b9-4ec1-abc1-098640c265c8-kube-api-access-pkkd4\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.062193 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170a9df3-c6b9-4ec1-abc1-098640c265c8-catalog-content\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.062676 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170a9df3-c6b9-4ec1-abc1-098640c265c8-utilities\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.062784 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170a9df3-c6b9-4ec1-abc1-098640c265c8-catalog-content\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.083687 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkkd4\" (UniqueName: \"kubernetes.io/projected/170a9df3-c6b9-4ec1-abc1-098640c265c8-kube-api-access-pkkd4\") pod \"redhat-marketplace-ckzjh\" (UID: \"170a9df3-c6b9-4ec1-abc1-098640c265c8\") " pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.121950 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.331680 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ckzjh"]
Jan 23 06:39:20 crc kubenswrapper[4937]: W0123 06:39:20.336123 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod170a9df3_c6b9_4ec1_abc1_098640c265c8.slice/crio-68e278b07c0662df6260744de1b5e396dc8173712509c76d452d11bc92fe37ac WatchSource:0}: Error finding container 68e278b07c0662df6260744de1b5e396dc8173712509c76d452d11bc92fe37ac: Status 404 returned error can't find the container with id 68e278b07c0662df6260744de1b5e396dc8173712509c76d452d11bc92fe37ac
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.432728 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckzjh" event={"ID":"170a9df3-c6b9-4ec1-abc1-098640c265c8","Type":"ContainerStarted","Data":"68e278b07c0662df6260744de1b5e396dc8173712509c76d452d11bc92fe37ac"}
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.436092 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lw92f" event={"ID":"3700c64c-0a23-4247-aba5-3a4f6da806d3","Type":"ContainerStarted","Data":"460bbd0c02f32c0f188dadb913b69fe5212a774fd946be95fbc8e9a37806f981"}
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.439865 4937 generic.go:334] "Generic (PLEG): container finished" podID="5379755c-affb-443d-ab53-50eaaf5b5324" containerID="d800ba5522a93c71e81d7cd00a0332dbaa04264144ea340130d47c0522b55b73" exitCode=0
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.439919 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m52s" event={"ID":"5379755c-affb-443d-ab53-50eaaf5b5324","Type":"ContainerDied","Data":"d800ba5522a93c71e81d7cd00a0332dbaa04264144ea340130d47c0522b55b73"}
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.439947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m52s" event={"ID":"5379755c-affb-443d-ab53-50eaaf5b5324","Type":"ContainerStarted","Data":"19706878f909df9a987112edcc39695e3e35c56d4cb5e68ccbeb848acd6c0c04"}
Jan 23 06:39:20 crc kubenswrapper[4937]: I0123 06:39:20.480276 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lw92f" podStartSLOduration=1.7559446090000002 podStartE2EDuration="4.480250895s" podCreationTimestamp="2026-01-23 06:39:16 +0000 UTC" firstStartedPulling="2026-01-23 06:39:17.19857571 +0000 UTC m=+357.002342373" lastFinishedPulling="2026-01-23 06:39:19.922882006 +0000 UTC m=+359.726648659" observedRunningTime="2026-01-23 06:39:20.454785284 +0000 UTC m=+360.258551947" watchObservedRunningTime="2026-01-23 06:39:20.480250895 +0000 UTC m=+360.284017548"
Jan 23 06:39:21 crc kubenswrapper[4937]: I0123 06:39:21.449600 4937 generic.go:334] "Generic (PLEG): container finished" podID="ed183f54-661c-4e72-9a2a-cd277c6119d1" containerID="e37cddd8f941b6e2c83aa424d82bb51606373919cb9fec5c7f34e6cd62dae557" exitCode=0
Jan 23 06:39:21 crc kubenswrapper[4937]: I0123 06:39:21.449743 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vt9wx" event={"ID":"ed183f54-661c-4e72-9a2a-cd277c6119d1","Type":"ContainerDied","Data":"e37cddd8f941b6e2c83aa424d82bb51606373919cb9fec5c7f34e6cd62dae557"}
Jan 23 06:39:21 crc kubenswrapper[4937]: I0123 06:39:21.452788 4937 generic.go:334] "Generic (PLEG): container finished" podID="170a9df3-c6b9-4ec1-abc1-098640c265c8"
containerID="87c03423acf7fb4c1ab1a962dc530702806b555a770cb8f1555a26d24ca53500" exitCode=0 Jan 23 06:39:21 crc kubenswrapper[4937]: I0123 06:39:21.452976 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckzjh" event={"ID":"170a9df3-c6b9-4ec1-abc1-098640c265c8","Type":"ContainerDied","Data":"87c03423acf7fb4c1ab1a962dc530702806b555a770cb8f1555a26d24ca53500"} Jan 23 06:39:22 crc kubenswrapper[4937]: I0123 06:39:22.459954 4937 generic.go:334] "Generic (PLEG): container finished" podID="170a9df3-c6b9-4ec1-abc1-098640c265c8" containerID="5372834508dbb79ec887d7899b9117966ddadb108ae9fad0a8b99734f1cb2c3f" exitCode=0 Jan 23 06:39:22 crc kubenswrapper[4937]: I0123 06:39:22.460079 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckzjh" event={"ID":"170a9df3-c6b9-4ec1-abc1-098640c265c8","Type":"ContainerDied","Data":"5372834508dbb79ec887d7899b9117966ddadb108ae9fad0a8b99734f1cb2c3f"} Jan 23 06:39:22 crc kubenswrapper[4937]: I0123 06:39:22.464060 4937 generic.go:334] "Generic (PLEG): container finished" podID="5379755c-affb-443d-ab53-50eaaf5b5324" containerID="9658a2111a79cd1c75e702f6e09c9f699a994b0fdc669567fd651f0533d64e31" exitCode=0 Jan 23 06:39:22 crc kubenswrapper[4937]: I0123 06:39:22.464101 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m52s" event={"ID":"5379755c-affb-443d-ab53-50eaaf5b5324","Type":"ContainerDied","Data":"9658a2111a79cd1c75e702f6e09c9f699a994b0fdc669567fd651f0533d64e31"} Jan 23 06:39:22 crc kubenswrapper[4937]: I0123 06:39:22.466721 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vt9wx" event={"ID":"ed183f54-661c-4e72-9a2a-cd277c6119d1","Type":"ContainerStarted","Data":"11236fcac1f0e386ac353aa398ce9f5d94242f9ab6d9ad06823b55d8381a2b3b"} Jan 23 06:39:22 crc kubenswrapper[4937]: I0123 06:39:22.561715 4937 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/community-operators-vt9wx" podStartSLOduration=3.001392879 podStartE2EDuration="5.561701341s" podCreationTimestamp="2026-01-23 06:39:17 +0000 UTC" firstStartedPulling="2026-01-23 06:39:19.422923637 +0000 UTC m=+359.226690290" lastFinishedPulling="2026-01-23 06:39:21.983232099 +0000 UTC m=+361.786998752" observedRunningTime="2026-01-23 06:39:22.505953767 +0000 UTC m=+362.309720420" watchObservedRunningTime="2026-01-23 06:39:22.561701341 +0000 UTC m=+362.365467994" Jan 23 06:39:23 crc kubenswrapper[4937]: I0123 06:39:23.474811 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6m52s" event={"ID":"5379755c-affb-443d-ab53-50eaaf5b5324","Type":"ContainerStarted","Data":"cdae95b84d6b29249276ce0776e1e36a7a6abc68c4a5b6aa4876d49f2a53f308"} Jan 23 06:39:23 crc kubenswrapper[4937]: I0123 06:39:23.476873 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ckzjh" event={"ID":"170a9df3-c6b9-4ec1-abc1-098640c265c8","Type":"ContainerStarted","Data":"a06c79a06879c82a8da2678f1aba56fa363812a400427210e332c2c367607814"} Jan 23 06:39:23 crc kubenswrapper[4937]: I0123 06:39:23.497740 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6m52s" podStartSLOduration=2.898463424 podStartE2EDuration="5.497723904s" podCreationTimestamp="2026-01-23 06:39:18 +0000 UTC" firstStartedPulling="2026-01-23 06:39:20.442242813 +0000 UTC m=+360.246009486" lastFinishedPulling="2026-01-23 06:39:23.041503313 +0000 UTC m=+362.845269966" observedRunningTime="2026-01-23 06:39:23.494059945 +0000 UTC m=+363.297826598" watchObservedRunningTime="2026-01-23 06:39:23.497723904 +0000 UTC m=+363.301490557" Jan 23 06:39:23 crc kubenswrapper[4937]: I0123 06:39:23.515907 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ckzjh" 
podStartSLOduration=2.96080913 podStartE2EDuration="4.515891548s" podCreationTimestamp="2026-01-23 06:39:19 +0000 UTC" firstStartedPulling="2026-01-23 06:39:21.45825814 +0000 UTC m=+361.262024793" lastFinishedPulling="2026-01-23 06:39:23.013340538 +0000 UTC m=+362.817107211" observedRunningTime="2026-01-23 06:39:23.513663828 +0000 UTC m=+363.317430481" watchObservedRunningTime="2026-01-23 06:39:23.515891548 +0000 UTC m=+363.319658201" Jan 23 06:39:25 crc kubenswrapper[4937]: I0123 06:39:25.150560 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58498d946f-zh2tv"] Jan 23 06:39:25 crc kubenswrapper[4937]: I0123 06:39:25.151287 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" podUID="b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" containerName="controller-manager" containerID="cri-o://b57d16daf2958da4f9ad82be99879d812cb051e55f63d9a5ac90ef484ca916aa" gracePeriod=30 Jan 23 06:39:26 crc kubenswrapper[4937]: I0123 06:39:26.491349 4937 generic.go:334] "Generic (PLEG): container finished" podID="b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" containerID="b57d16daf2958da4f9ad82be99879d812cb051e55f63d9a5ac90ef484ca916aa" exitCode=0 Jan 23 06:39:26 crc kubenswrapper[4937]: I0123 06:39:26.491449 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" event={"ID":"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b","Type":"ContainerDied","Data":"b57d16daf2958da4f9ad82be99879d812cb051e55f63d9a5ac90ef484ca916aa"} Jan 23 06:39:26 crc kubenswrapper[4937]: I0123 06:39:26.724105 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lw92f" Jan 23 06:39:26 crc kubenswrapper[4937]: I0123 06:39:26.724192 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lw92f" Jan 23 
06:39:26 crc kubenswrapper[4937]: I0123 06:39:26.822393 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lw92f" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.047277 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.077817 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58496d6896-7pzfh"] Jan 23 06:39:27 crc kubenswrapper[4937]: E0123 06:39:27.078089 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" containerName="controller-manager" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.078110 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" containerName="controller-manager" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.078319 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" containerName="controller-manager" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.078849 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.088045 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58496d6896-7pzfh"] Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148228 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-proxy-ca-bundles\") pod \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148301 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-serving-cert\") pod \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148361 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-config\") pod \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148403 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-client-ca\") pod \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\" (UID: \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148464 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z6w2\" (UniqueName: \"kubernetes.io/projected/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-kube-api-access-8z6w2\") pod \"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\" (UID: 
\"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b\") " Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148629 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-config\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148688 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e3d1bf-b671-4345-a855-4981f5061ebd-serving-cert\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148711 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-proxy-ca-bundles\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148727 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-client-ca\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.148748 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrmz\" (UniqueName: 
\"kubernetes.io/projected/61e3d1bf-b671-4345-a855-4981f5061ebd-kube-api-access-vnrmz\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.149354 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" (UID: "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.149905 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" (UID: "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.150209 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-config" (OuterVolumeSpecName: "config") pod "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" (UID: "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.154707 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" (UID: "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.156623 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-kube-api-access-8z6w2" (OuterVolumeSpecName: "kube-api-access-8z6w2") pod "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" (UID: "b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b"). InnerVolumeSpecName "kube-api-access-8z6w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249589 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-proxy-ca-bundles\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249684 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-client-ca\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249731 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrmz\" (UniqueName: \"kubernetes.io/projected/61e3d1bf-b671-4345-a855-4981f5061ebd-kube-api-access-vnrmz\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249801 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-config\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249902 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e3d1bf-b671-4345-a855-4981f5061ebd-serving-cert\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249957 4937 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249977 4937 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.249996 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.250015 4937 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.250034 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z6w2\" (UniqueName: \"kubernetes.io/projected/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b-kube-api-access-8z6w2\") on node \"crc\" DevicePath \"\"" Jan 23 06:39:27 crc 
kubenswrapper[4937]: I0123 06:39:27.251586 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-proxy-ca-bundles\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.251856 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-client-ca\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.252799 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61e3d1bf-b671-4345-a855-4981f5061ebd-config\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.261084 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61e3d1bf-b671-4345-a855-4981f5061ebd-serving-cert\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.272897 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrmz\" (UniqueName: \"kubernetes.io/projected/61e3d1bf-b671-4345-a855-4981f5061ebd-kube-api-access-vnrmz\") pod \"controller-manager-58496d6896-7pzfh\" (UID: \"61e3d1bf-b671-4345-a855-4981f5061ebd\") " 
pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.402752 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.511928 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" event={"ID":"b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b","Type":"ContainerDied","Data":"1f045e31d1faa1b7f0b1c8ebd2f9cc52c5f2c36cbc1e6350e20ce389416ae33c"} Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.512328 4937 scope.go:117] "RemoveContainer" containerID="b57d16daf2958da4f9ad82be99879d812cb051e55f63d9a5ac90ef484ca916aa" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.511950 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58498d946f-zh2tv" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.595978 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58498d946f-zh2tv"] Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.596439 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lw92f" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.601093 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58498d946f-zh2tv"] Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.731106 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vt9wx" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.731198 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vt9wx" Jan 23 06:39:27 crc 
kubenswrapper[4937]: I0123 06:39:27.795787 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vt9wx" Jan 23 06:39:27 crc kubenswrapper[4937]: I0123 06:39:27.922540 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58496d6896-7pzfh"] Jan 23 06:39:27 crc kubenswrapper[4937]: W0123 06:39:27.931717 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61e3d1bf_b671_4345_a855_4981f5061ebd.slice/crio-e7189e53c11152e0d0ef09521718716f00157334a96cfca436bed803735f4803 WatchSource:0}: Error finding container e7189e53c11152e0d0ef09521718716f00157334a96cfca436bed803735f4803: Status 404 returned error can't find the container with id e7189e53c11152e0d0ef09521718716f00157334a96cfca436bed803735f4803 Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.517773 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" event={"ID":"61e3d1bf-b671-4345-a855-4981f5061ebd","Type":"ContainerStarted","Data":"432070a434dea304e25a3ed107e0a150ceae1d770fcf71f0d95b742cc7bffe92"} Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.518221 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" event={"ID":"61e3d1bf-b671-4345-a855-4981f5061ebd","Type":"ContainerStarted","Data":"e7189e53c11152e0d0ef09521718716f00157334a96cfca436bed803735f4803"} Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.518252 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.541460 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b" 
path="/var/lib/kubelet/pods/b99f7cd3-f9d5-446c-b07f-b4d1efb1cd5b/volumes" Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.542167 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.546157 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58496d6896-7pzfh" podStartSLOduration=3.546145377 podStartE2EDuration="3.546145377s" podCreationTimestamp="2026-01-23 06:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:39:28.545259703 +0000 UTC m=+368.349026356" watchObservedRunningTime="2026-01-23 06:39:28.546145377 +0000 UTC m=+368.349912030" Jan 23 06:39:28 crc kubenswrapper[4937]: I0123 06:39:28.587042 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vt9wx" Jan 23 06:39:29 crc kubenswrapper[4937]: I0123 06:39:29.108283 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6m52s" Jan 23 06:39:29 crc kubenswrapper[4937]: I0123 06:39:29.108350 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6m52s" Jan 23 06:39:29 crc kubenswrapper[4937]: I0123 06:39:29.149921 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6m52s" Jan 23 06:39:29 crc kubenswrapper[4937]: I0123 06:39:29.589258 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6m52s" Jan 23 06:39:30 crc kubenswrapper[4937]: I0123 06:39:30.122842 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:30 crc kubenswrapper[4937]: I0123 06:39:30.123343 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:30 crc kubenswrapper[4937]: I0123 06:39:30.167693 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:30 crc kubenswrapper[4937]: I0123 06:39:30.576830 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ckzjh"
Jan 23 06:39:37 crc kubenswrapper[4937]: I0123 06:39:37.723785 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:39:37 crc kubenswrapper[4937]: I0123 06:39:37.724499 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:40:07 crc kubenswrapper[4937]: I0123 06:40:07.723840 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:40:07 crc kubenswrapper[4937]: I0123 06:40:07.724480 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.724336 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.725481 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.725586 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.727135 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c3fa8299bd0b6b9a8218ede66a23b2f4276f22508c1f048b3346b7c7147e467"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.727249 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://0c3fa8299bd0b6b9a8218ede66a23b2f4276f22508c1f048b3346b7c7147e467" gracePeriod=600
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.986620 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="0c3fa8299bd0b6b9a8218ede66a23b2f4276f22508c1f048b3346b7c7147e467" exitCode=0
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.986770 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"0c3fa8299bd0b6b9a8218ede66a23b2f4276f22508c1f048b3346b7c7147e467"}
Jan 23 06:40:37 crc kubenswrapper[4937]: I0123 06:40:37.986986 4937 scope.go:117] "RemoveContainer" containerID="973ab07811284978d9d7c9fa117682a163b455018a54b9dccf110eb76487584d"
Jan 23 06:40:38 crc kubenswrapper[4937]: I0123 06:40:38.996050 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"7d3631879897790e7de1452e56edfc15429f364a23b8cbc48fd77e98ca63206e"}
Jan 23 06:42:21 crc kubenswrapper[4937]: I0123 06:42:21.073772 4937 scope.go:117] "RemoveContainer" containerID="bb0d45891f596d84f61ea2184c440c76a97a2fbde591a25475495be9b0c41a0c"
Jan 23 06:43:07 crc kubenswrapper[4937]: I0123 06:43:07.724096 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:43:07 crc kubenswrapper[4937]: I0123 06:43:07.724904 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:43:21 crc kubenswrapper[4937]: I0123 06:43:21.113017 4937 scope.go:117] "RemoveContainer" containerID="968b9be7387cb8cbb8e1042436e2b5e47e5c3a093505abb8e66e3cbfc36bcd21"
Jan 23 06:43:21 crc kubenswrapper[4937]: I0123 06:43:21.132519 4937 scope.go:117] "RemoveContainer" containerID="5dfd2696ad5f690db6c8fee0a5fbf34c968b08ed8dcee92816f584e0542b6752"
Jan 23 06:43:37 crc kubenswrapper[4937]: I0123 06:43:37.734763 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:43:37 crc kubenswrapper[4937]: I0123 06:43:37.735524 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:44:07 crc kubenswrapper[4937]: I0123 06:44:07.724256 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:44:07 crc kubenswrapper[4937]: I0123 06:44:07.725199 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:44:07 crc kubenswrapper[4937]: I0123 06:44:07.725289 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 06:44:07 crc kubenswrapper[4937]: I0123 06:44:07.726567 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d3631879897790e7de1452e56edfc15429f364a23b8cbc48fd77e98ca63206e"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 06:44:07 crc kubenswrapper[4937]: I0123 06:44:07.726711 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://7d3631879897790e7de1452e56edfc15429f364a23b8cbc48fd77e98ca63206e" gracePeriod=600
Jan 23 06:44:08 crc kubenswrapper[4937]: I0123 06:44:08.412195 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="7d3631879897790e7de1452e56edfc15429f364a23b8cbc48fd77e98ca63206e" exitCode=0
Jan 23 06:44:08 crc kubenswrapper[4937]: I0123 06:44:08.412255 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"7d3631879897790e7de1452e56edfc15429f364a23b8cbc48fd77e98ca63206e"}
Jan 23 06:44:08 crc kubenswrapper[4937]: I0123 06:44:08.412455 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"998b72db21c32d780c7acaf4315ccfdf0bdabc15b6ed9c145723fd20240726ab"}
Jan 23 06:44:08 crc kubenswrapper[4937]: I0123 06:44:08.412507 4937 scope.go:117] "RemoveContainer" containerID="0c3fa8299bd0b6b9a8218ede66a23b2f4276f22508c1f048b3346b7c7147e467"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.316306 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9nkvp"]
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.317942 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.346074 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9nkvp"]
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407252 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4fc6b331-7b97-470e-b9d0-8650fd658f16-registry-certificates\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407335 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4fc6b331-7b97-470e-b9d0-8650fd658f16-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407372 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4fc6b331-7b97-470e-b9d0-8650fd658f16-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407534 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqbdk\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-kube-api-access-vqbdk\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407631 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4fc6b331-7b97-470e-b9d0-8650fd658f16-trusted-ca\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407688 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407737 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-registry-tls\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.407791 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-bound-sa-token\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.437188 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509645 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4fc6b331-7b97-470e-b9d0-8650fd658f16-registry-certificates\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509748 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4fc6b331-7b97-470e-b9d0-8650fd658f16-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509775 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4fc6b331-7b97-470e-b9d0-8650fd658f16-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509821 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqbdk\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-kube-api-access-vqbdk\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509847 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4fc6b331-7b97-470e-b9d0-8650fd658f16-trusted-ca\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509873 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-registry-tls\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.509922 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-bound-sa-token\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.511232 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4fc6b331-7b97-470e-b9d0-8650fd658f16-registry-certificates\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.512292 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4fc6b331-7b97-470e-b9d0-8650fd658f16-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.514259 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4fc6b331-7b97-470e-b9d0-8650fd658f16-trusted-ca\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.520302 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4fc6b331-7b97-470e-b9d0-8650fd658f16-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.521247 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-registry-tls\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.532802 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-bound-sa-token\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.533748 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqbdk\" (UniqueName: \"kubernetes.io/projected/4fc6b331-7b97-470e-b9d0-8650fd658f16-kube-api-access-vqbdk\") pod \"image-registry-66df7c8f76-9nkvp\" (UID: \"4fc6b331-7b97-470e-b9d0-8650fd658f16\") " pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.637314 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:12 crc kubenswrapper[4937]: I0123 06:44:12.869374 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9nkvp"]
Jan 23 06:44:13 crc kubenswrapper[4937]: I0123 06:44:13.461067 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp" event={"ID":"4fc6b331-7b97-470e-b9d0-8650fd658f16","Type":"ContainerStarted","Data":"d879861a0d3ddfa96a20e0b34460fb336b8025b1b04a59df4a8fd7d03fef5986"}
Jan 23 06:44:13 crc kubenswrapper[4937]: I0123 06:44:13.461569 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:13 crc kubenswrapper[4937]: I0123 06:44:13.461632 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp" event={"ID":"4fc6b331-7b97-470e-b9d0-8650fd658f16","Type":"ContainerStarted","Data":"f15322a5441a29d8dc6c59be8ec8e4de51be28796633521002ee12d533bc5ba5"}
Jan 23 06:44:13 crc kubenswrapper[4937]: I0123 06:44:13.489839 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp" podStartSLOduration=1.489818574 podStartE2EDuration="1.489818574s" podCreationTimestamp="2026-01-23 06:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:13.486165926 +0000 UTC m=+653.289932589" watchObservedRunningTime="2026-01-23 06:44:13.489818574 +0000 UTC m=+653.293585237"
Jan 23 06:44:21 crc kubenswrapper[4937]: I0123 06:44:21.168300 4937 scope.go:117] "RemoveContainer" containerID="16d213b325fd1d5d13835b0a08e10029b0c1fb4bc4c99d4ca4606e9f8ef14f9f"
Jan 23 06:44:32 crc kubenswrapper[4937]: I0123 06:44:32.649727 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9nkvp"
Jan 23 06:44:32 crc kubenswrapper[4937]: I0123 06:44:32.732036 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lsjwm"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.407637 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.408504 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.412556 4937 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7hb5w"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.412648 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.412917 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.414011 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-6frw2"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.415780 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6frw2"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.418254 4937 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-htb6b"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.428792 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.446510 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6frw2"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.450373 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-96ttx"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.451093 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.452817 4937 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-s2hn4"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.460296 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-96ttx"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.481154 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zz7\" (UniqueName: \"kubernetes.io/projected/a2720484-f07d-45fe-8acd-54191c11123f-kube-api-access-l6zz7\") pod \"cert-manager-858654f9db-6frw2\" (UID: \"a2720484-f07d-45fe-8acd-54191c11123f\") " pod="cert-manager/cert-manager-858654f9db-6frw2"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.481235 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wc5m\" (UniqueName: \"kubernetes.io/projected/2a7bd38b-fde5-4a38-bae8-c72a44172d4e-kube-api-access-2wc5m\") pod \"cert-manager-cainjector-cf98fcc89-8vpjz\" (UID: \"2a7bd38b-fde5-4a38-bae8-c72a44172d4e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.481256 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fnp\" (UniqueName: \"kubernetes.io/projected/ca58b1fb-629d-412a-9b10-11a58e9a82ab-kube-api-access-62fnp\") pod \"cert-manager-webhook-687f57d79b-96ttx\" (UID: \"ca58b1fb-629d-412a-9b10-11a58e9a82ab\") " pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.582756 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zz7\" (UniqueName: \"kubernetes.io/projected/a2720484-f07d-45fe-8acd-54191c11123f-kube-api-access-l6zz7\") pod \"cert-manager-858654f9db-6frw2\" (UID: \"a2720484-f07d-45fe-8acd-54191c11123f\") " pod="cert-manager/cert-manager-858654f9db-6frw2"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.582862 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wc5m\" (UniqueName: \"kubernetes.io/projected/2a7bd38b-fde5-4a38-bae8-c72a44172d4e-kube-api-access-2wc5m\") pod \"cert-manager-cainjector-cf98fcc89-8vpjz\" (UID: \"2a7bd38b-fde5-4a38-bae8-c72a44172d4e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.582883 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fnp\" (UniqueName: \"kubernetes.io/projected/ca58b1fb-629d-412a-9b10-11a58e9a82ab-kube-api-access-62fnp\") pod \"cert-manager-webhook-687f57d79b-96ttx\" (UID: \"ca58b1fb-629d-412a-9b10-11a58e9a82ab\") " pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.603939 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zz7\" (UniqueName: \"kubernetes.io/projected/a2720484-f07d-45fe-8acd-54191c11123f-kube-api-access-l6zz7\") pod \"cert-manager-858654f9db-6frw2\" (UID: \"a2720484-f07d-45fe-8acd-54191c11123f\") " pod="cert-manager/cert-manager-858654f9db-6frw2"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.604105 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fnp\" (UniqueName: \"kubernetes.io/projected/ca58b1fb-629d-412a-9b10-11a58e9a82ab-kube-api-access-62fnp\") pod \"cert-manager-webhook-687f57d79b-96ttx\" (UID: \"ca58b1fb-629d-412a-9b10-11a58e9a82ab\") " pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.605846 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wc5m\" (UniqueName: \"kubernetes.io/projected/2a7bd38b-fde5-4a38-bae8-c72a44172d4e-kube-api-access-2wc5m\") pod \"cert-manager-cainjector-cf98fcc89-8vpjz\" (UID: \"2a7bd38b-fde5-4a38-bae8-c72a44172d4e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.729398 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.741610 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-6frw2"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.770095 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.954336 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz"]
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.962948 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 06:44:35 crc kubenswrapper[4937]: I0123 06:44:35.999441 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-6frw2"]
Jan 23 06:44:36 crc kubenswrapper[4937]: W0123 06:44:36.000424 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2720484_f07d_45fe_8acd_54191c11123f.slice/crio-296f2cde4e37c56b67fe50bbc9b0c62211ba7d3dba419c167ccee7c7c618724b WatchSource:0}: Error finding container 296f2cde4e37c56b67fe50bbc9b0c62211ba7d3dba419c167ccee7c7c618724b: Status 404 returned error can't find the container with id 296f2cde4e37c56b67fe50bbc9b0c62211ba7d3dba419c167ccee7c7c618724b
Jan 23 06:44:36 crc kubenswrapper[4937]: I0123 06:44:36.042155 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-96ttx"]
Jan 23 06:44:36 crc kubenswrapper[4937]: W0123 06:44:36.043293 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca58b1fb_629d_412a_9b10_11a58e9a82ab.slice/crio-ecf59aeb45680a0cf3467f3fa78147846ab34a85e61c0e11ab398aacd18e979c WatchSource:0}: Error finding container ecf59aeb45680a0cf3467f3fa78147846ab34a85e61c0e11ab398aacd18e979c: Status 404 returned error can't find the container with id ecf59aeb45680a0cf3467f3fa78147846ab34a85e61c0e11ab398aacd18e979c
Jan 23 06:44:36 crc kubenswrapper[4937]: I0123 06:44:36.616566 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6frw2" event={"ID":"a2720484-f07d-45fe-8acd-54191c11123f","Type":"ContainerStarted","Data":"296f2cde4e37c56b67fe50bbc9b0c62211ba7d3dba419c167ccee7c7c618724b"}
Jan 23 06:44:36 crc kubenswrapper[4937]: I0123 06:44:36.618649 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz" event={"ID":"2a7bd38b-fde5-4a38-bae8-c72a44172d4e","Type":"ContainerStarted","Data":"bb0348dcf1d4c2e00979170c02bedc56f8650fc81fc5fe8f7d26b11e34dcc35d"}
Jan 23 06:44:36 crc kubenswrapper[4937]: I0123 06:44:36.620537 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx" event={"ID":"ca58b1fb-629d-412a-9b10-11a58e9a82ab","Type":"ContainerStarted","Data":"ecf59aeb45680a0cf3467f3fa78147846ab34a85e61c0e11ab398aacd18e979c"}
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.792828 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hqgs9"]
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.793802 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="nbdb" containerID="cri-o://44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.794011 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="sbdb" containerID="cri-o://81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.794094 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.794105 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-node" containerID="cri-o://92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.794141 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="northd" containerID="cri-o://d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.794175 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-acl-logging" containerID="cri-o://58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.794194 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-controller" containerID="cri-o://2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100" gracePeriod=30
Jan 23 06:44:44 crc kubenswrapper[4937]: I0123 06:44:44.848721 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" containerID="cri-o://5853272001fb9ba14897e9ac001b2ecb67428fb7e562c2303a245dacb8133b9f" gracePeriod=30
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.683706 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovnkube-controller/3.log"
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.687920 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovn-acl-logging/0.log"
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689087 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovn-controller/0.log"
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689751 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="5853272001fb9ba14897e9ac001b2ecb67428fb7e562c2303a245dacb8133b9f" exitCode=0
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689794 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e" exitCode=0
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689808 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279" exitCode=0
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689806 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"5853272001fb9ba14897e9ac001b2ecb67428fb7e562c2303a245dacb8133b9f"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689823 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b" exitCode=0
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689836 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2" exitCode=0
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689848 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a" exitCode=0
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689879 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689894 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e" exitCode=143
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689902 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689974 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689982 4937 scope.go:117] "RemoveContainer" containerID="4481f95583999ccaf663303884063c61c3754eb8f115ba4600b3b468516f4f41"
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689998 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.690014 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.690027 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.689907 4937 generic.go:334] "Generic (PLEG): container finished" podID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerID="2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100" exitCode=143
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.690115 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100"}
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.692924 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/2.log"
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.693675 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/1.log"
Jan 23 06:44:45 crc kubenswrapper[4937]: I0123
06:44:45.693748 4937 generic.go:334] "Generic (PLEG): container finished" podID="ddcbbc37-6ac2-41e5-a7ea-04de9284c50a" containerID="a16dcccd6ca28bd3f08d1217478481827c25c3bc2e24679aa2d2ce901794a61b" exitCode=2 Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.693828 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerDied","Data":"a16dcccd6ca28bd3f08d1217478481827c25c3bc2e24679aa2d2ce901794a61b"} Jan 23 06:44:45 crc kubenswrapper[4937]: I0123 06:44:45.694755 4937 scope.go:117] "RemoveContainer" containerID="a16dcccd6ca28bd3f08d1217478481827c25c3bc2e24679aa2d2ce901794a61b" Jan 23 06:44:45 crc kubenswrapper[4937]: E0123 06:44:45.695204 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bhj54_openshift-multus(ddcbbc37-6ac2-41e5-a7ea-04de9284c50a)\"" pod="openshift-multus/multus-bhj54" podUID="ddcbbc37-6ac2-41e5-a7ea-04de9284c50a" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.120873 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovn-acl-logging/0.log" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.121496 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovn-controller/0.log" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.122149 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172654 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-etc-openvswitch\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172709 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-script-lib\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172741 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-systemd\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172772 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-bin\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172791 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-var-lib-openvswitch\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172808 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-netd\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172831 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-openvswitch\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172851 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-kubelet\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172876 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-ovn-kubernetes\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172895 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-slash\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172931 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-systemd-units\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 
06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.172958 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovn-node-metrics-cert\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173001 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-netns\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173037 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v77hj\" (UniqueName: \"kubernetes.io/projected/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-kube-api-access-v77hj\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173069 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-config\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173094 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-node-log\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173119 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-ovn\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173139 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-log-socket\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173161 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-env-overrides\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173184 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\" (UID: \"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229\") " Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173453 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.173491 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.174009 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.174057 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.174489 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.175797 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176100 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176197 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176233 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176237 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176275 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176285 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-node-log" (OuterVolumeSpecName: "node-log") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176306 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176319 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176338 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-slash" (OuterVolumeSpecName: "host-slash") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176350 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-log-socket" (OuterVolumeSpecName: "log-socket") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.176894 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.192604 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ptkcr"] Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.192721 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-kube-api-access-v77hj" (OuterVolumeSpecName: "kube-api-access-v77hj") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "kube-api-access-v77hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.192848 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.192868 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-acl-logging" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.192942 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-acl-logging" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.192990 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="nbdb" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193004 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="nbdb" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193024 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193036 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193051 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-node" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193064 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-node" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193084 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kubecfg-setup" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193097 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kubecfg-setup" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193116 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193127 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193145 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193156 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193171 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193183 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193199 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193211 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193231 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193242 4937 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193260 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="northd" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193270 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="northd" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193286 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="sbdb" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193298 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="sbdb" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193546 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193566 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193579 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193616 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193632 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193646 4937 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193664 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="nbdb" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193675 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193691 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovn-acl-logging" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193707 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="northd" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193718 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="kube-rbac-proxy-node" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193733 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="sbdb" Jan 23 06:44:46 crc kubenswrapper[4937]: E0123 06:44:46.193894 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.193907 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" containerName="ovnkube-controller" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.195997 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" (UID: 
"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.196881 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274241 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-systemd\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274295 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-slash\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274319 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-kubelet\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274357 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-var-lib-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274386 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-node-log\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274409 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-cni-netd\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274438 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-env-overrides\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274458 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb70bd33-5b00-4eec-8708-b55d500747d4-ovn-node-metrics-cert\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274703 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghvj\" (UniqueName: \"kubernetes.io/projected/cb70bd33-5b00-4eec-8708-b55d500747d4-kube-api-access-hghvj\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: 
I0123 06:44:46.274815 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-ovnkube-config\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274847 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-log-socket\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274882 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-systemd-units\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274909 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-run-netns\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274939 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-etc-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc 
kubenswrapper[4937]: I0123 06:44:46.274960 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.274988 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-ovn\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275019 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-ovnkube-script-lib\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275631 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-cni-bin\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275670 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275711 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275808 4937 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275836 4937 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275849 4937 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275861 4937 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275873 4937 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275887 4937 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275899 4937 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275910 4937 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275923 4937 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275940 4937 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275952 4937 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275964 4937 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275975 4937 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-run-netns\") on node \"crc\" DevicePath 
\"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.275990 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v77hj\" (UniqueName: \"kubernetes.io/projected/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-kube-api-access-v77hj\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.276002 4937 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.276015 4937 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.276029 4937 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.276041 4937 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.276053 4937 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.276070 4937 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377362 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377423 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377455 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-systemd\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377478 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-slash\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377482 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-systemd\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377502 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-kubelet\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377450 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377528 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-slash\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377520 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-run-ovn-kubernetes\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377566 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-var-lib-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377534 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-var-lib-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377624 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-kubelet\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377673 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-node-log\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377718 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-cni-netd\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377748 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-env-overrides\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377762 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/cb70bd33-5b00-4eec-8708-b55d500747d4-ovn-node-metrics-cert\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377786 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghvj\" (UniqueName: \"kubernetes.io/projected/cb70bd33-5b00-4eec-8708-b55d500747d4-kube-api-access-hghvj\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377800 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-ovnkube-config\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377818 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-cni-netd\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377853 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-log-socket\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377833 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-log-socket\") pod 
\"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377927 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-systemd-units\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.377973 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-run-netns\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378024 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-etc-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378068 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378109 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-ovn\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378201 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-ovnkube-script-lib\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378311 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-run-netns\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378367 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-node-log\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378419 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-systemd-units\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378726 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-cni-bin\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378621 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-etc-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378637 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-ovn\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378568 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-ovnkube-config\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378691 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-host-cni-bin\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378583 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cb70bd33-5b00-4eec-8708-b55d500747d4-run-openvswitch\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.378783 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-env-overrides\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.379072 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cb70bd33-5b00-4eec-8708-b55d500747d4-ovnkube-script-lib\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.391245 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cb70bd33-5b00-4eec-8708-b55d500747d4-ovn-node-metrics-cert\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.406386 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghvj\" (UniqueName: \"kubernetes.io/projected/cb70bd33-5b00-4eec-8708-b55d500747d4-kube-api-access-hghvj\") pod \"ovnkube-node-ptkcr\" (UID: \"cb70bd33-5b00-4eec-8708-b55d500747d4\") " pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.481473 4937 scope.go:117] "RemoveContainer" containerID="46a429144d0552298786ee2b19d9340a29a8558aefcf75a274c2c3e3bbab20ed" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.514824 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" Jan 23 06:44:46 crc kubenswrapper[4937]: W0123 06:44:46.555875 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb70bd33_5b00_4eec_8708_b55d500747d4.slice/crio-ec5743db39b9c358846bb02d241a02fc5b7954fe35bc6c54521cf530a36330a1 WatchSource:0}: Error finding container ec5743db39b9c358846bb02d241a02fc5b7954fe35bc6c54521cf530a36330a1: Status 404 returned error can't find the container with id ec5743db39b9c358846bb02d241a02fc5b7954fe35bc6c54521cf530a36330a1 Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.698784 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"ec5743db39b9c358846bb02d241a02fc5b7954fe35bc6c54521cf530a36330a1"} Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.701968 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovn-acl-logging/0.log" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.702375 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hqgs9_8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/ovn-controller/0.log" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.702963 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" event={"ID":"8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229","Type":"ContainerDied","Data":"d5ee99735aa0583e200520ef0ecacb41803a0693581ea665038547c23f94aafc"} Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.702992 4937 scope.go:117] "RemoveContainer" containerID="5853272001fb9ba14897e9ac001b2ecb67428fb7e562c2303a245dacb8133b9f" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.703117 4937 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hqgs9" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.707061 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/2.log" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.721325 4937 scope.go:117] "RemoveContainer" containerID="81aa87d3cda7dfd96dcbe228f013621f61b2613a3ca7f0b7f6d57489badfca4e" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.736730 4937 scope.go:117] "RemoveContainer" containerID="44d72fb6a5431cfa82883c3a98a6ac948b9c17b98d9289f3c241da6b0db10279" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.752512 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hqgs9"] Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.758807 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hqgs9"] Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.761620 4937 scope.go:117] "RemoveContainer" containerID="d710410d033782c05b1120fd3e829397545cb7db89f6f43c0f82ff2d1588e64b" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.774797 4937 scope.go:117] "RemoveContainer" containerID="881985dd16a1e14af4101352a619fa03b2c2777750cda9f7d59adc1745c188c2" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.790728 4937 scope.go:117] "RemoveContainer" containerID="92a9af786bde176f78bfbd3a9e793286877f2badbead3ebe6615c880d6053c2a" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.803977 4937 scope.go:117] "RemoveContainer" containerID="58eb2e68bd3805df915f11edd56f4f5a0bfab98cb15a72ff3a8c773e439aa91e" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.814772 4937 scope.go:117] "RemoveContainer" containerID="2d31e6d30097a0ea84e943ef67170c96d59e19ba389aa0c08b92c7a68bd52100" Jan 23 06:44:46 crc kubenswrapper[4937]: I0123 06:44:46.829262 4937 scope.go:117] "RemoveContainer" 
containerID="78a48c54efc3b4a66aaf25d0ae7434fcebcaaad429ceeb1a4d5f42db8607ab7a"
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.714279 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx" event={"ID":"ca58b1fb-629d-412a-9b10-11a58e9a82ab","Type":"ContainerStarted","Data":"9fdeac661ef0765c0f17850e4cbb852a344677fa806ed01ce789d0620467f856"}
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.715681 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.715751 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-6frw2" event={"ID":"a2720484-f07d-45fe-8acd-54191c11123f","Type":"ContainerStarted","Data":"068303537c9ce9f2123bbe6c7c7b4a1d3f74b65888f21562ce3e8c67ad1ea77b"}
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.716934 4937 generic.go:334] "Generic (PLEG): container finished" podID="cb70bd33-5b00-4eec-8708-b55d500747d4" containerID="6ce316b9656e96f6e8bdadaa837dd0c4a24dd0e65fce28c312c2bff922224664" exitCode=0
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.716984 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerDied","Data":"6ce316b9656e96f6e8bdadaa837dd0c4a24dd0e65fce28c312c2bff922224664"}
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.718361 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz" event={"ID":"2a7bd38b-fde5-4a38-bae8-c72a44172d4e","Type":"ContainerStarted","Data":"e046bfb3a89e58e55b3c189bb6a0d59f18dbf3843e3c6aa0b130b46cf477f7de"}
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.732738 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx" podStartSLOduration=2.230940303 podStartE2EDuration="12.732706374s" podCreationTimestamp="2026-01-23 06:44:35 +0000 UTC" firstStartedPulling="2026-01-23 06:44:36.045326007 +0000 UTC m=+675.849092660" lastFinishedPulling="2026-01-23 06:44:46.547092078 +0000 UTC m=+686.350858731" observedRunningTime="2026-01-23 06:44:47.727305088 +0000 UTC m=+687.531071741" watchObservedRunningTime="2026-01-23 06:44:47.732706374 +0000 UTC m=+687.536473067"
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.745361 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-6frw2" podStartSLOduration=2.20242043 podStartE2EDuration="12.745337477s" podCreationTimestamp="2026-01-23 06:44:35 +0000 UTC" firstStartedPulling="2026-01-23 06:44:36.002627569 +0000 UTC m=+675.806394222" lastFinishedPulling="2026-01-23 06:44:46.545544616 +0000 UTC m=+686.349311269" observedRunningTime="2026-01-23 06:44:47.741493192 +0000 UTC m=+687.545259865" watchObservedRunningTime="2026-01-23 06:44:47.745337477 +0000 UTC m=+687.549104130"
Jan 23 06:44:47 crc kubenswrapper[4937]: I0123 06:44:47.791169 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-8vpjz" podStartSLOduration=2.215651327 podStartE2EDuration="12.791151968s" podCreationTimestamp="2026-01-23 06:44:35 +0000 UTC" firstStartedPulling="2026-01-23 06:44:35.962642565 +0000 UTC m=+675.766409218" lastFinishedPulling="2026-01-23 06:44:46.538143206 +0000 UTC m=+686.341909859" observedRunningTime="2026-01-23 06:44:47.790901651 +0000 UTC m=+687.594668304" watchObservedRunningTime="2026-01-23 06:44:47.791151968 +0000 UTC m=+687.594918621"
Jan 23 06:44:48 crc kubenswrapper[4937]: I0123 06:44:48.549566 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229" path="/var/lib/kubelet/pods/8fd7ddb4-f9d3-45e1-9e40-3bc91b81a229/volumes"
Jan 23 06:44:48 crc kubenswrapper[4937]: I0123 06:44:48.727762 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"1d0c4ebe59ae5a0cf7d40e334a568dcb3bf44d9599232fc1ce63af3e2c85d83d"}
Jan 23 06:44:48 crc kubenswrapper[4937]: I0123 06:44:48.727844 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"00ddc34ab8207a56275c345287c2a1667415c33dac4790575345626420f35597"}
Jan 23 06:44:48 crc kubenswrapper[4937]: I0123 06:44:48.727861 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"eced658c43810b02b9c55a1315e9a8ee6e116303783b4b853cc8b45d91d1217f"}
Jan 23 06:44:49 crc kubenswrapper[4937]: I0123 06:44:49.737488 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"fd908c3e7821048f80ae0660f43222e10a66b6cda9a91dd73079b3a991e55f61"}
Jan 23 06:44:49 crc kubenswrapper[4937]: I0123 06:44:49.737899 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"86d8866bac2304ecc0ba3aabfe8d956ea661f15a9d823bffae0e5cd7bc37ae5f"}
Jan 23 06:44:49 crc kubenswrapper[4937]: I0123 06:44:49.737919 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"d650043b80580bc5b8e85414474b0fe57d46e4769a37af9d183a65cdce89585f"}
Jan 23 06:44:51 crc kubenswrapper[4937]: I0123 06:44:51.753069 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"e1478cb0c76897a9d448edf40952ad4809942a458ecc76b0f0c0324453d8d34f"}
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.774953 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" event={"ID":"cb70bd33-5b00-4eec-8708-b55d500747d4","Type":"ContainerStarted","Data":"e027dce69d6765e0231b14f5a052e4ebffd837b220beeaf93b35b16d42087808"}
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.775276 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr"
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.775288 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr"
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.775296 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr"
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.812522 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr"
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.821142 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr"
Jan 23 06:44:54 crc kubenswrapper[4937]: I0123 06:44:54.857609 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr" podStartSLOduration=8.857579422 podStartE2EDuration="8.857579422s" podCreationTimestamp="2026-01-23 06:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:44:54.821564607 +0000 UTC m=+694.625331260" watchObservedRunningTime="2026-01-23 06:44:54.857579422 +0000 UTC m=+694.661346075"
Jan 23 06:44:55 crc kubenswrapper[4937]: I0123 06:44:55.773727 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-96ttx"
Jan 23 06:44:57 crc kubenswrapper[4937]: I0123 06:44:57.784700 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" podUID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" containerName="registry" containerID="cri-o://035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e" gracePeriod=30
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.013920 4937 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-lsjwm container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body=
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.014017 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" podUID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused"
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.663309 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.758881 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-bound-sa-token\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.759410 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-ca-trust-extracted\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.759466 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-tls\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.759501 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-trusted-ca\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.759537 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-installation-pull-secrets\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.759582 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pqzp\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-kube-api-access-7pqzp\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.761354 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.765742 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.765857 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-certificates\") pod \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\" (UID: \"aaaebcc7-b79a-4068-be05-1e4c1808e6b4\") "
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.766397 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.766938 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-kube-api-access-7pqzp" (OuterVolumeSpecName: "kube-api-access-7pqzp") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "kube-api-access-7pqzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.767752 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.767886 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.767830 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.768195 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.785158 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.789669 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "aaaebcc7-b79a-4068-be05-1e4c1808e6b4" (UID: "aaaebcc7-b79a-4068-be05-1e4c1808e6b4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.799488 4937 generic.go:334] "Generic (PLEG): container finished" podID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" containerID="035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e" exitCode=0
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.799543 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" event={"ID":"aaaebcc7-b79a-4068-be05-1e4c1808e6b4","Type":"ContainerDied","Data":"035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e"}
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.799573 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm" event={"ID":"aaaebcc7-b79a-4068-be05-1e4c1808e6b4","Type":"ContainerDied","Data":"7d3b4a99b92ec9ac8e011e5f11115e85ff0b395578e0b2a3fdb2b2938bd444ae"}
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.799614 4937 scope.go:117] "RemoveContainer" containerID="035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e"
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.799736 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lsjwm"
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.867346 4937 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.867393 4937 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.867412 4937 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.867429 4937 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.867446 4937 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.867462 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pqzp\" (UniqueName: \"kubernetes.io/projected/aaaebcc7-b79a-4068-be05-1e4c1808e6b4-kube-api-access-7pqzp\") on node \"crc\" DevicePath \"\""
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.874429 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lsjwm"]
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.875277 4937 scope.go:117] "RemoveContainer" containerID="035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e"
Jan 23 06:44:58 crc kubenswrapper[4937]: E0123 06:44:58.875811 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e\": container with ID starting with 035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e not found: ID does not exist" containerID="035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e"
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.875845 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e"} err="failed to get container status \"035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e\": rpc error: code = NotFound desc = could not find container \"035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e\": container with ID starting with 035518516f143966f79e1be95c74a33050d4432efbb10202a8151302f51bb96e not found: ID does not exist"
Jan 23 06:44:58 crc kubenswrapper[4937]: I0123 06:44:58.886835 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lsjwm"]
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.180657 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"]
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.181351 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" containerName="registry"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.181372 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" containerName="registry"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.181524 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" containerName="registry"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.182167 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.184767 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.184837 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.191062 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"]
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.285698 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trr4p\" (UniqueName: \"kubernetes.io/projected/d742ae3a-78f8-4ba5-9722-d385565718e3-kube-api-access-trr4p\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.285752 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d742ae3a-78f8-4ba5-9722-d385565718e3-secret-volume\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.285795 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d742ae3a-78f8-4ba5-9722-d385565718e3-config-volume\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.387146 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d742ae3a-78f8-4ba5-9722-d385565718e3-config-volume\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.387270 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trr4p\" (UniqueName: \"kubernetes.io/projected/d742ae3a-78f8-4ba5-9722-d385565718e3-kube-api-access-trr4p\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.387302 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d742ae3a-78f8-4ba5-9722-d385565718e3-secret-volume\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.388383 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d742ae3a-78f8-4ba5-9722-d385565718e3-config-volume\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.393765 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d742ae3a-78f8-4ba5-9722-d385565718e3-secret-volume\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.407925 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trr4p\" (UniqueName: \"kubernetes.io/projected/d742ae3a-78f8-4ba5-9722-d385565718e3-kube-api-access-trr4p\") pod \"collect-profiles-29485845-tf8tf\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.526107 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.532255 4937 scope.go:117] "RemoveContainer" containerID="a16dcccd6ca28bd3f08d1217478481827c25c3bc2e24679aa2d2ce901794a61b"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.532656 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-bhj54_openshift-multus(ddcbbc37-6ac2-41e5-a7ea-04de9284c50a)\"" pod="openshift-multus/multus-bhj54" podUID="ddcbbc37-6ac2-41e5-a7ea-04de9284c50a"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.552466 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaaebcc7-b79a-4068-be05-1e4c1808e6b4" path="/var/lib/kubelet/pods/aaaebcc7-b79a-4068-be05-1e4c1808e6b4/volumes"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.572670 4937 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(c3c2c7ab5f3a5b6cde1b86c2f8a3c5ae4c88fe44612c45535082212b53bdcce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.572830 4937 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(c3c2c7ab5f3a5b6cde1b86c2f8a3c5ae4c88fe44612c45535082212b53bdcce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.572879 4937 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(c3c2c7ab5f3a5b6cde1b86c2f8a3c5ae4c88fe44612c45535082212b53bdcce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.572973 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager(d742ae3a-78f8-4ba5-9722-d385565718e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager(d742ae3a-78f8-4ba5-9722-d385565718e3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(c3c2c7ab5f3a5b6cde1b86c2f8a3c5ae4c88fe44612c45535082212b53bdcce7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf" podUID="d742ae3a-78f8-4ba5-9722-d385565718e3"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.813489 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: I0123 06:45:00.814280 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.854423 4937 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(5a138660032a88d0e9399fdb8147bca46b9d91651400ef8e484bc7d1a5a2fad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.854496 4937 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(5a138660032a88d0e9399fdb8147bca46b9d91651400ef8e484bc7d1a5a2fad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.854519 4937 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(5a138660032a88d0e9399fdb8147bca46b9d91651400ef8e484bc7d1a5a2fad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:00 crc kubenswrapper[4937]: E0123 06:45:00.854577 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager(d742ae3a-78f8-4ba5-9722-d385565718e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager(d742ae3a-78f8-4ba5-9722-d385565718e3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29485845-tf8tf_openshift-operator-lifecycle-manager_d742ae3a-78f8-4ba5-9722-d385565718e3_0(5a138660032a88d0e9399fdb8147bca46b9d91651400ef8e484bc7d1a5a2fad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf" podUID="d742ae3a-78f8-4ba5-9722-d385565718e3"
Jan 23 06:45:12 crc kubenswrapper[4937]: I0123 06:45:12.527305 4937 scope.go:117] "RemoveContainer" containerID="a16dcccd6ca28bd3f08d1217478481827c25c3bc2e24679aa2d2ce901794a61b"
Jan 23 06:45:12 crc kubenswrapper[4937]: I0123 06:45:12.894392 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bhj54_ddcbbc37-6ac2-41e5-a7ea-04de9284c50a/kube-multus/2.log"
Jan 23 06:45:12 crc kubenswrapper[4937]: I0123 06:45:12.894481 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bhj54" event={"ID":"ddcbbc37-6ac2-41e5-a7ea-04de9284c50a","Type":"ContainerStarted","Data":"98be0baa837f0b96446bf79788f14ab1e5e1ce6e569b3510781f790f1d36d0bf"}
Jan 23 06:45:14 crc kubenswrapper[4937]: I0123 06:45:14.526510 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:14 crc kubenswrapper[4937]: I0123 06:45:14.527693 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:14 crc kubenswrapper[4937]: I0123 06:45:14.822753 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"]
Jan 23 06:45:14 crc kubenswrapper[4937]: W0123 06:45:14.825826 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd742ae3a_78f8_4ba5_9722_d385565718e3.slice/crio-77be9e03efd9a650c8e2a3768e2d7cc113aeb64750c2f7ed41708713e9d80b6e WatchSource:0}: Error finding container 77be9e03efd9a650c8e2a3768e2d7cc113aeb64750c2f7ed41708713e9d80b6e: Status 404 returned error can't find the container with id 77be9e03efd9a650c8e2a3768e2d7cc113aeb64750c2f7ed41708713e9d80b6e
Jan 23 06:45:14 crc kubenswrapper[4937]: I0123 06:45:14.909560 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf" event={"ID":"d742ae3a-78f8-4ba5-9722-d385565718e3","Type":"ContainerStarted","Data":"77be9e03efd9a650c8e2a3768e2d7cc113aeb64750c2f7ed41708713e9d80b6e"}
Jan 23 06:45:15 crc kubenswrapper[4937]: I0123 06:45:15.917750 4937 generic.go:334] "Generic (PLEG): container finished" podID="d742ae3a-78f8-4ba5-9722-d385565718e3" containerID="864264824ced3c77b9279620a7150395c9a4df6deb72da12ea14c11eb2156891" exitCode=0
Jan 23 06:45:15 crc kubenswrapper[4937]: I0123 06:45:15.918040 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf" event={"ID":"d742ae3a-78f8-4ba5-9722-d385565718e3","Type":"ContainerDied","Data":"864264824ced3c77b9279620a7150395c9a4df6deb72da12ea14c11eb2156891"}
Jan 23 06:45:16 crc kubenswrapper[4937]: I0123 06:45:16.553242 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ptkcr"
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.221921 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.386044 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trr4p\" (UniqueName: \"kubernetes.io/projected/d742ae3a-78f8-4ba5-9722-d385565718e3-kube-api-access-trr4p\") pod \"d742ae3a-78f8-4ba5-9722-d385565718e3\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") "
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.386167 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d742ae3a-78f8-4ba5-9722-d385565718e3-secret-volume\") pod \"d742ae3a-78f8-4ba5-9722-d385565718e3\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") "
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.386210 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d742ae3a-78f8-4ba5-9722-d385565718e3-config-volume\") pod \"d742ae3a-78f8-4ba5-9722-d385565718e3\" (UID: \"d742ae3a-78f8-4ba5-9722-d385565718e3\") "
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.387454 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d742ae3a-78f8-4ba5-9722-d385565718e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "d742ae3a-78f8-4ba5-9722-d385565718e3" (UID: "d742ae3a-78f8-4ba5-9722-d385565718e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.395642 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d742ae3a-78f8-4ba5-9722-d385565718e3-kube-api-access-trr4p" (OuterVolumeSpecName: "kube-api-access-trr4p") pod "d742ae3a-78f8-4ba5-9722-d385565718e3" (UID: "d742ae3a-78f8-4ba5-9722-d385565718e3"). InnerVolumeSpecName "kube-api-access-trr4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.395680 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d742ae3a-78f8-4ba5-9722-d385565718e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d742ae3a-78f8-4ba5-9722-d385565718e3" (UID: "d742ae3a-78f8-4ba5-9722-d385565718e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.488528 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d742ae3a-78f8-4ba5-9722-d385565718e3-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.488598 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trr4p\" (UniqueName: \"kubernetes.io/projected/d742ae3a-78f8-4ba5-9722-d385565718e3-kube-api-access-trr4p\") on node \"crc\" DevicePath \"\""
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.488657 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d742ae3a-78f8-4ba5-9722-d385565718e3-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.934163 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf" event={"ID":"d742ae3a-78f8-4ba5-9722-d385565718e3","Type":"ContainerDied","Data":"77be9e03efd9a650c8e2a3768e2d7cc113aeb64750c2f7ed41708713e9d80b6e"}
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.934214 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77be9e03efd9a650c8e2a3768e2d7cc113aeb64750c2f7ed41708713e9d80b6e"
Jan 23 06:45:17 crc kubenswrapper[4937]: I0123 06:45:17.934244 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"
Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.400161 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv"]
Jan 23 06:45:26 crc kubenswrapper[4937]: E0123 06:45:26.403482 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d742ae3a-78f8-4ba5-9722-d385565718e3" containerName="collect-profiles"
Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.403524 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d742ae3a-78f8-4ba5-9722-d385565718e3" containerName="collect-profiles"
Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.403789 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d742ae3a-78f8-4ba5-9722-d385565718e3" containerName="collect-profiles"
Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.405383 4937 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.408167 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.409147 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv"] Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.514931 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.515026 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.515151 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrr6\" (UniqueName: \"kubernetes.io/projected/9ab594b7-c995-4eec-b45a-9433a0300440-kube-api-access-cwrr6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: 
I0123 06:45:26.617293 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwrr6\" (UniqueName: \"kubernetes.io/projected/9ab594b7-c995-4eec-b45a-9433a0300440-kube-api-access-cwrr6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.617680 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.617797 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.618351 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.618692 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.645539 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwrr6\" (UniqueName: \"kubernetes.io/projected/9ab594b7-c995-4eec-b45a-9433a0300440-kube-api-access-cwrr6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:26 crc kubenswrapper[4937]: I0123 06:45:26.757707 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:27 crc kubenswrapper[4937]: I0123 06:45:27.010275 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv"] Jan 23 06:45:27 crc kubenswrapper[4937]: W0123 06:45:27.015074 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ab594b7_c995_4eec_b45a_9433a0300440.slice/crio-597ead39be05fcc46e6df8738fb2d24659a42b7f3aad329d80dfbdc8aa8a77d4 WatchSource:0}: Error finding container 597ead39be05fcc46e6df8738fb2d24659a42b7f3aad329d80dfbdc8aa8a77d4: Status 404 returned error can't find the container with id 597ead39be05fcc46e6df8738fb2d24659a42b7f3aad329d80dfbdc8aa8a77d4 Jan 23 06:45:27 crc kubenswrapper[4937]: I0123 06:45:27.999466 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" 
event={"ID":"9ab594b7-c995-4eec-b45a-9433a0300440","Type":"ContainerStarted","Data":"597ead39be05fcc46e6df8738fb2d24659a42b7f3aad329d80dfbdc8aa8a77d4"} Jan 23 06:45:29 crc kubenswrapper[4937]: I0123 06:45:29.020871 4937 generic.go:334] "Generic (PLEG): container finished" podID="9ab594b7-c995-4eec-b45a-9433a0300440" containerID="85d0e01bf1b51e9fddbcd42eb5ae04281b3a03338afd4fb134dec91c7f339554" exitCode=0 Jan 23 06:45:29 crc kubenswrapper[4937]: I0123 06:45:29.021190 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" event={"ID":"9ab594b7-c995-4eec-b45a-9433a0300440","Type":"ContainerDied","Data":"85d0e01bf1b51e9fddbcd42eb5ae04281b3a03338afd4fb134dec91c7f339554"} Jan 23 06:45:31 crc kubenswrapper[4937]: I0123 06:45:31.039881 4937 generic.go:334] "Generic (PLEG): container finished" podID="9ab594b7-c995-4eec-b45a-9433a0300440" containerID="0613b218c92c1380446d7ac923028fc1c439a55c4851f085e053a4358adec206" exitCode=0 Jan 23 06:45:31 crc kubenswrapper[4937]: I0123 06:45:31.039961 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" event={"ID":"9ab594b7-c995-4eec-b45a-9433a0300440","Type":"ContainerDied","Data":"0613b218c92c1380446d7ac923028fc1c439a55c4851f085e053a4358adec206"} Jan 23 06:45:32 crc kubenswrapper[4937]: I0123 06:45:32.053379 4937 generic.go:334] "Generic (PLEG): container finished" podID="9ab594b7-c995-4eec-b45a-9433a0300440" containerID="89ffbe8266a6e47fc9da410e0c540be65c50cbcba6e46b8799cb9865d413c795" exitCode=0 Jan 23 06:45:32 crc kubenswrapper[4937]: I0123 06:45:32.053435 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" event={"ID":"9ab594b7-c995-4eec-b45a-9433a0300440","Type":"ContainerDied","Data":"89ffbe8266a6e47fc9da410e0c540be65c50cbcba6e46b8799cb9865d413c795"} 
Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.361738 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.518349 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwrr6\" (UniqueName: \"kubernetes.io/projected/9ab594b7-c995-4eec-b45a-9433a0300440-kube-api-access-cwrr6\") pod \"9ab594b7-c995-4eec-b45a-9433a0300440\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.518463 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-bundle\") pod \"9ab594b7-c995-4eec-b45a-9433a0300440\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.518581 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-util\") pod \"9ab594b7-c995-4eec-b45a-9433a0300440\" (UID: \"9ab594b7-c995-4eec-b45a-9433a0300440\") " Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.522377 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-bundle" (OuterVolumeSpecName: "bundle") pod "9ab594b7-c995-4eec-b45a-9433a0300440" (UID: "9ab594b7-c995-4eec-b45a-9433a0300440"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.535761 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ab594b7-c995-4eec-b45a-9433a0300440-kube-api-access-cwrr6" (OuterVolumeSpecName: "kube-api-access-cwrr6") pod "9ab594b7-c995-4eec-b45a-9433a0300440" (UID: "9ab594b7-c995-4eec-b45a-9433a0300440"). InnerVolumeSpecName "kube-api-access-cwrr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.550537 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-util" (OuterVolumeSpecName: "util") pod "9ab594b7-c995-4eec-b45a-9433a0300440" (UID: "9ab594b7-c995-4eec-b45a-9433a0300440"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.620471 4937 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.620521 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwrr6\" (UniqueName: \"kubernetes.io/projected/9ab594b7-c995-4eec-b45a-9433a0300440-kube-api-access-cwrr6\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:33 crc kubenswrapper[4937]: I0123 06:45:33.620540 4937 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ab594b7-c995-4eec-b45a-9433a0300440-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:45:34 crc kubenswrapper[4937]: I0123 06:45:34.071688 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" 
event={"ID":"9ab594b7-c995-4eec-b45a-9433a0300440","Type":"ContainerDied","Data":"597ead39be05fcc46e6df8738fb2d24659a42b7f3aad329d80dfbdc8aa8a77d4"} Jan 23 06:45:34 crc kubenswrapper[4937]: I0123 06:45:34.072103 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597ead39be05fcc46e6df8738fb2d24659a42b7f3aad329d80dfbdc8aa8a77d4" Jan 23 06:45:34 crc kubenswrapper[4937]: I0123 06:45:34.071879 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.227451 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t"] Jan 23 06:45:43 crc kubenswrapper[4937]: E0123 06:45:43.228046 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="pull" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.228066 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="pull" Jan 23 06:45:43 crc kubenswrapper[4937]: E0123 06:45:43.228088 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="util" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.228099 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="util" Jan 23 06:45:43 crc kubenswrapper[4937]: E0123 06:45:43.228137 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="extract" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.228151 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="extract" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.228259 4937 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9ab594b7-c995-4eec-b45a-9433a0300440" containerName="extract" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.228646 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.231433 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.231500 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-znk6c" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.231553 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.246569 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59x6\" (UniqueName: \"kubernetes.io/projected/bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5-kube-api-access-w59x6\") pod \"obo-prometheus-operator-68bc856cb9-cwb6t\" (UID: \"bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.249908 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.341881 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.342570 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.346510 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.346547 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dgzbf" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.347493 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b2901ce-8ec3-48a2-956f-bb0dfb4a023f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9\" (UID: \"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.347530 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b2901ce-8ec3-48a2-956f-bb0dfb4a023f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9\" (UID: \"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.347565 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59x6\" (UniqueName: \"kubernetes.io/projected/bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5-kube-api-access-w59x6\") pod \"obo-prometheus-operator-68bc856cb9-cwb6t\" (UID: \"bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.352761 4937 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.353694 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.356531 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.375172 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.381061 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59x6\" (UniqueName: \"kubernetes.io/projected/bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5-kube-api-access-w59x6\") pod \"obo-prometheus-operator-68bc856cb9-cwb6t\" (UID: \"bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.447681 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-htmfn"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.448319 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.448380 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b2901ce-8ec3-48a2-956f-bb0dfb4a023f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9\" (UID: \"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.448427 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b2901ce-8ec3-48a2-956f-bb0dfb4a023f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9\" (UID: \"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.456810 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-brrcg" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.456972 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.456973 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b2901ce-8ec3-48a2-956f-bb0dfb4a023f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9\" (UID: \"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.457205 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6b2901ce-8ec3-48a2-956f-bb0dfb4a023f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9\" (UID: \"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.475891 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-htmfn"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.545118 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-q8swq"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.545324 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.545922 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.548665 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-hv49h" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.549012 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxhbl\" (UniqueName: \"kubernetes.io/projected/6c905d28-0ee0-4fb3-8ee4-2268d65d9626-kube-api-access-pxhbl\") pod \"observability-operator-59bdc8b94-htmfn\" (UID: \"6c905d28-0ee0-4fb3-8ee4-2268d65d9626\") " pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.549083 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4\" (UID: \"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.549108 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c905d28-0ee0-4fb3-8ee4-2268d65d9626-observability-operator-tls\") pod \"observability-operator-59bdc8b94-htmfn\" (UID: \"6c905d28-0ee0-4fb3-8ee4-2268d65d9626\") " pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.549129 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4\" (UID: \"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.562835 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-q8swq"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.650062 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4\" (UID: \"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.650106 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6c905d28-0ee0-4fb3-8ee4-2268d65d9626-observability-operator-tls\") pod \"observability-operator-59bdc8b94-htmfn\" (UID: \"6c905d28-0ee0-4fb3-8ee4-2268d65d9626\") " pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.650134 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4\" (UID: \"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.650173 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxhbl\" (UniqueName: \"kubernetes.io/projected/6c905d28-0ee0-4fb3-8ee4-2268d65d9626-kube-api-access-pxhbl\") pod \"observability-operator-59bdc8b94-htmfn\" (UID: \"6c905d28-0ee0-4fb3-8ee4-2268d65d9626\") " pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.650200 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr4rv\" (UniqueName: \"kubernetes.io/projected/22215e13-9220-494c-8402-aeb857926025-kube-api-access-qr4rv\") pod \"perses-operator-5bf474d74f-q8swq\" (UID: \"22215e13-9220-494c-8402-aeb857926025\") " pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.650218 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/22215e13-9220-494c-8402-aeb857926025-openshift-service-ca\") pod \"perses-operator-5bf474d74f-q8swq\" (UID: \"22215e13-9220-494c-8402-aeb857926025\") " 
pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.654349 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c905d28-0ee0-4fb3-8ee4-2268d65d9626-observability-operator-tls\") pod \"observability-operator-59bdc8b94-htmfn\" (UID: \"6c905d28-0ee0-4fb3-8ee4-2268d65d9626\") " pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.656190 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4\" (UID: \"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.661251 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4\" (UID: \"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.666892 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxhbl\" (UniqueName: \"kubernetes.io/projected/6c905d28-0ee0-4fb3-8ee4-2268d65d9626-kube-api-access-pxhbl\") pod \"observability-operator-59bdc8b94-htmfn\" (UID: \"6c905d28-0ee0-4fb3-8ee4-2268d65d9626\") " pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.669513 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.691066 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.744068 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t"] Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.751818 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr4rv\" (UniqueName: \"kubernetes.io/projected/22215e13-9220-494c-8402-aeb857926025-kube-api-access-qr4rv\") pod \"perses-operator-5bf474d74f-q8swq\" (UID: \"22215e13-9220-494c-8402-aeb857926025\") " pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.751851 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/22215e13-9220-494c-8402-aeb857926025-openshift-service-ca\") pod \"perses-operator-5bf474d74f-q8swq\" (UID: \"22215e13-9220-494c-8402-aeb857926025\") " pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.752662 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/22215e13-9220-494c-8402-aeb857926025-openshift-service-ca\") pod \"perses-operator-5bf474d74f-q8swq\" (UID: \"22215e13-9220-494c-8402-aeb857926025\") " pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: W0123 06:45:43.769031 4937 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb87646f_fb1a_4ec8_9d5f_6aeb9fdbb8c5.slice/crio-bdd74fce3f0cfefe12475d4eee3b6659bf1c2651611daa205f3ce68bca9a8082 WatchSource:0}: Error finding container bdd74fce3f0cfefe12475d4eee3b6659bf1c2651611daa205f3ce68bca9a8082: Status 404 returned error can't find the container with id bdd74fce3f0cfefe12475d4eee3b6659bf1c2651611daa205f3ce68bca9a8082 Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.772111 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr4rv\" (UniqueName: \"kubernetes.io/projected/22215e13-9220-494c-8402-aeb857926025-kube-api-access-qr4rv\") pod \"perses-operator-5bf474d74f-q8swq\" (UID: \"22215e13-9220-494c-8402-aeb857926025\") " pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.778017 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.894036 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:43 crc kubenswrapper[4937]: I0123 06:45:43.922072 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9"] Jan 23 06:45:44 crc kubenswrapper[4937]: I0123 06:45:44.107216 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-htmfn"] Jan 23 06:45:44 crc kubenswrapper[4937]: W0123 06:45:44.121778 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c905d28_0ee0_4fb3_8ee4_2268d65d9626.slice/crio-2238d45f07ddc1ef332678c57305261139f1dc10d8aebce1c186cfecca9dbe25 WatchSource:0}: Error finding container 2238d45f07ddc1ef332678c57305261139f1dc10d8aebce1c186cfecca9dbe25: Status 404 returned error can't find the container with id 2238d45f07ddc1ef332678c57305261139f1dc10d8aebce1c186cfecca9dbe25 Jan 23 06:45:44 crc kubenswrapper[4937]: I0123 06:45:44.144613 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" event={"ID":"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f","Type":"ContainerStarted","Data":"fba11bd0c42b6f9d7a19d580a00d6477106924d42f80b01fb6e5e2bb447cae22"} Jan 23 06:45:44 crc kubenswrapper[4937]: I0123 06:45:44.146543 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" event={"ID":"bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5","Type":"ContainerStarted","Data":"bdd74fce3f0cfefe12475d4eee3b6659bf1c2651611daa205f3ce68bca9a8082"} Jan 23 06:45:44 crc kubenswrapper[4937]: I0123 06:45:44.149101 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" 
event={"ID":"6c905d28-0ee0-4fb3-8ee4-2268d65d9626","Type":"ContainerStarted","Data":"2238d45f07ddc1ef332678c57305261139f1dc10d8aebce1c186cfecca9dbe25"} Jan 23 06:45:44 crc kubenswrapper[4937]: I0123 06:45:44.167947 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4"] Jan 23 06:45:44 crc kubenswrapper[4937]: W0123 06:45:44.178859 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcf60bc9_8a7e_493a_a3ee_c7e83e84c59a.slice/crio-f6a07d757412f7eeae096bb6878bf0ccac8fa0632d1776c77aed39b9ccf3ad31 WatchSource:0}: Error finding container f6a07d757412f7eeae096bb6878bf0ccac8fa0632d1776c77aed39b9ccf3ad31: Status 404 returned error can't find the container with id f6a07d757412f7eeae096bb6878bf0ccac8fa0632d1776c77aed39b9ccf3ad31 Jan 23 06:45:44 crc kubenswrapper[4937]: I0123 06:45:44.437998 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-q8swq"] Jan 23 06:45:44 crc kubenswrapper[4937]: W0123 06:45:44.443633 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22215e13_9220_494c_8402_aeb857926025.slice/crio-181b7e34d86e762cf5b8ea902317481d701e336edbac9b74e1797a6f706d604f WatchSource:0}: Error finding container 181b7e34d86e762cf5b8ea902317481d701e336edbac9b74e1797a6f706d604f: Status 404 returned error can't find the container with id 181b7e34d86e762cf5b8ea902317481d701e336edbac9b74e1797a6f706d604f Jan 23 06:45:45 crc kubenswrapper[4937]: I0123 06:45:45.163438 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" event={"ID":"22215e13-9220-494c-8402-aeb857926025","Type":"ContainerStarted","Data":"181b7e34d86e762cf5b8ea902317481d701e336edbac9b74e1797a6f706d604f"} Jan 23 06:45:45 crc kubenswrapper[4937]: I0123 
06:45:45.165034 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" event={"ID":"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a","Type":"ContainerStarted","Data":"f6a07d757412f7eeae096bb6878bf0ccac8fa0632d1776c77aed39b9ccf3ad31"} Jan 23 06:45:54 crc kubenswrapper[4937]: I0123 06:45:54.096721 4937 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.241538 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" event={"ID":"dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a","Type":"ContainerStarted","Data":"8150f7c9c5842711e12b9a9f45fdd8dbbe32aa38ec78f1e04d3bd18aa3dc3665"} Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.243511 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" event={"ID":"bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5","Type":"ContainerStarted","Data":"5f1ea1f77df445c2b0a703387a87e4232221d13cef854ea34880d318a42e29b5"} Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.245977 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" event={"ID":"6c905d28-0ee0-4fb3-8ee4-2268d65d9626","Type":"ContainerStarted","Data":"6dce25ef2297f3cbe06942e83a2ebf6a8d3ec546330e2ef709c3d5508b0aa8a3"} Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.246286 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.247523 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" 
event={"ID":"22215e13-9220-494c-8402-aeb857926025","Type":"ContainerStarted","Data":"304853fdcade0b8ae8fec1e6882206391947ef5ac42693f20f5ecba627aa52ab"} Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.247771 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.249761 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.250001 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" event={"ID":"6b2901ce-8ec3-48a2-956f-bb0dfb4a023f","Type":"ContainerStarted","Data":"d80db99e6e3d554d529a8b63e3a9fa1aafe8eab1f6a902a87db23d5aade56d98"} Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.267907 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4" podStartSLOduration=1.976125239 podStartE2EDuration="13.267886139s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="2026-01-23 06:45:44.187935849 +0000 UTC m=+743.991702502" lastFinishedPulling="2026-01-23 06:45:55.479696739 +0000 UTC m=+755.283463402" observedRunningTime="2026-01-23 06:45:56.26026313 +0000 UTC m=+756.064029783" watchObservedRunningTime="2026-01-23 06:45:56.267886139 +0000 UTC m=+756.071652792" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.294189 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cwb6t" podStartSLOduration=1.591784932 podStartE2EDuration="13.294160732s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="2026-01-23 06:45:43.776016854 +0000 UTC m=+743.579783507" lastFinishedPulling="2026-01-23 
06:45:55.478392644 +0000 UTC m=+755.282159307" observedRunningTime="2026-01-23 06:45:56.289926275 +0000 UTC m=+756.093692938" watchObservedRunningTime="2026-01-23 06:45:56.294160732 +0000 UTC m=+756.097927395" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.339391 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9" podStartSLOduration=1.8158003809999999 podStartE2EDuration="13.339378105s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="2026-01-23 06:45:43.955637482 +0000 UTC m=+743.759404135" lastFinishedPulling="2026-01-23 06:45:55.479215166 +0000 UTC m=+755.282981859" observedRunningTime="2026-01-23 06:45:56.33699814 +0000 UTC m=+756.140764793" watchObservedRunningTime="2026-01-23 06:45:56.339378105 +0000 UTC m=+756.143144758" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.358393 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" podStartSLOduration=2.325338251 podStartE2EDuration="13.358364607s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="2026-01-23 06:45:44.447144636 +0000 UTC m=+744.250911289" lastFinishedPulling="2026-01-23 06:45:55.480170972 +0000 UTC m=+755.283937645" observedRunningTime="2026-01-23 06:45:56.355808197 +0000 UTC m=+756.159574850" watchObservedRunningTime="2026-01-23 06:45:56.358364607 +0000 UTC m=+756.162131260" Jan 23 06:45:56 crc kubenswrapper[4937]: I0123 06:45:56.405099 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-htmfn" podStartSLOduration=1.99402144 podStartE2EDuration="13.405080421s" podCreationTimestamp="2026-01-23 06:45:43 +0000 UTC" firstStartedPulling="2026-01-23 06:45:44.124203696 +0000 UTC m=+743.927970349" lastFinishedPulling="2026-01-23 06:45:55.535262657 +0000 UTC m=+755.339029330" 
observedRunningTime="2026-01-23 06:45:56.402428149 +0000 UTC m=+756.206194802" watchObservedRunningTime="2026-01-23 06:45:56.405080421 +0000 UTC m=+756.208847074" Jan 23 06:46:03 crc kubenswrapper[4937]: I0123 06:46:03.897879 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-q8swq" Jan 23 06:46:22 crc kubenswrapper[4937]: I0123 06:46:22.935639 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t"] Jan 23 06:46:22 crc kubenswrapper[4937]: I0123 06:46:22.937505 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:22 crc kubenswrapper[4937]: I0123 06:46:22.942521 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 06:46:22 crc kubenswrapper[4937]: I0123 06:46:22.954059 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t"] Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.105574 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8l6\" (UniqueName: \"kubernetes.io/projected/8aabb2ce-fb24-40a6-9e87-51d402e08895-kube-api-access-dt8l6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.106012 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-bundle\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.106062 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.206731 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8l6\" (UniqueName: \"kubernetes.io/projected/8aabb2ce-fb24-40a6-9e87-51d402e08895-kube-api-access-dt8l6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.207074 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.207206 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.207541 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.208183 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.232430 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8l6\" (UniqueName: \"kubernetes.io/projected/8aabb2ce-fb24-40a6-9e87-51d402e08895-kube-api-access-dt8l6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.256215 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:23 crc kubenswrapper[4937]: I0123 06:46:23.524797 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t"] Jan 23 06:46:23 crc kubenswrapper[4937]: W0123 06:46:23.531046 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8aabb2ce_fb24_40a6_9e87_51d402e08895.slice/crio-9ece3389f49262edf1e3d62a0da6e445cba8bc689041d74adcf5f39198acb571 WatchSource:0}: Error finding container 9ece3389f49262edf1e3d62a0da6e445cba8bc689041d74adcf5f39198acb571: Status 404 returned error can't find the container with id 9ece3389f49262edf1e3d62a0da6e445cba8bc689041d74adcf5f39198acb571 Jan 23 06:46:24 crc kubenswrapper[4937]: I0123 06:46:24.425824 4937 generic.go:334] "Generic (PLEG): container finished" podID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerID="16a80468d88f1fd5febca1725ab876377bfd33eb0808c4af719bbaf92cc7245b" exitCode=0 Jan 23 06:46:24 crc kubenswrapper[4937]: I0123 06:46:24.425936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" event={"ID":"8aabb2ce-fb24-40a6-9e87-51d402e08895","Type":"ContainerDied","Data":"16a80468d88f1fd5febca1725ab876377bfd33eb0808c4af719bbaf92cc7245b"} Jan 23 06:46:24 crc kubenswrapper[4937]: I0123 06:46:24.426004 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" event={"ID":"8aabb2ce-fb24-40a6-9e87-51d402e08895","Type":"ContainerStarted","Data":"9ece3389f49262edf1e3d62a0da6e445cba8bc689041d74adcf5f39198acb571"} Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.285120 4937 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-cqtb7"] Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.287268 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.305915 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqtb7"] Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.339523 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-catalog-content\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.339616 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk2lf\" (UniqueName: \"kubernetes.io/projected/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-kube-api-access-dk2lf\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.339642 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-utilities\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.440384 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-catalog-content\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " 
pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.440437 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk2lf\" (UniqueName: \"kubernetes.io/projected/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-kube-api-access-dk2lf\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.440458 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-utilities\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.440940 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-utilities\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.441260 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-catalog-content\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 crc kubenswrapper[4937]: I0123 06:46:25.465015 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk2lf\" (UniqueName: \"kubernetes.io/projected/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-kube-api-access-dk2lf\") pod \"redhat-operators-cqtb7\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:25 
crc kubenswrapper[4937]: I0123 06:46:25.611712 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:26 crc kubenswrapper[4937]: I0123 06:46:26.120262 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqtb7"] Jan 23 06:46:26 crc kubenswrapper[4937]: W0123 06:46:26.126920 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e11507b_08c7_451e_b2b1_5b5c1d7715f5.slice/crio-20871b83e06b4504a6e4e0907e24c307c263a5717d25cb0fb6593e9fd3ec0260 WatchSource:0}: Error finding container 20871b83e06b4504a6e4e0907e24c307c263a5717d25cb0fb6593e9fd3ec0260: Status 404 returned error can't find the container with id 20871b83e06b4504a6e4e0907e24c307c263a5717d25cb0fb6593e9fd3ec0260 Jan 23 06:46:26 crc kubenswrapper[4937]: I0123 06:46:26.438843 4937 generic.go:334] "Generic (PLEG): container finished" podID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerID="761c23e8f9411b41aad038f4fc55bb4cfa6950677a00074fa5ab343c08dfe69a" exitCode=0 Jan 23 06:46:26 crc kubenswrapper[4937]: I0123 06:46:26.438951 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" event={"ID":"8aabb2ce-fb24-40a6-9e87-51d402e08895","Type":"ContainerDied","Data":"761c23e8f9411b41aad038f4fc55bb4cfa6950677a00074fa5ab343c08dfe69a"} Jan 23 06:46:26 crc kubenswrapper[4937]: I0123 06:46:26.440401 4937 generic.go:334] "Generic (PLEG): container finished" podID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerID="843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775" exitCode=0 Jan 23 06:46:26 crc kubenswrapper[4937]: I0123 06:46:26.440442 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" 
event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerDied","Data":"843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775"} Jan 23 06:46:26 crc kubenswrapper[4937]: I0123 06:46:26.440464 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerStarted","Data":"20871b83e06b4504a6e4e0907e24c307c263a5717d25cb0fb6593e9fd3ec0260"} Jan 23 06:46:27 crc kubenswrapper[4937]: I0123 06:46:27.451242 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerStarted","Data":"65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595"} Jan 23 06:46:27 crc kubenswrapper[4937]: I0123 06:46:27.454659 4937 generic.go:334] "Generic (PLEG): container finished" podID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerID="8780bd2f3d75dbff2653a0beda2ae3404c1316a9e77ad658a5770a68926cb4cd" exitCode=0 Jan 23 06:46:27 crc kubenswrapper[4937]: I0123 06:46:27.454691 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" event={"ID":"8aabb2ce-fb24-40a6-9e87-51d402e08895","Type":"ContainerDied","Data":"8780bd2f3d75dbff2653a0beda2ae3404c1316a9e77ad658a5770a68926cb4cd"} Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.463886 4937 generic.go:334] "Generic (PLEG): container finished" podID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerID="65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595" exitCode=0 Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.463924 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerDied","Data":"65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595"} Jan 23 06:46:28 crc 
kubenswrapper[4937]: I0123 06:46:28.753635 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.886305 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt8l6\" (UniqueName: \"kubernetes.io/projected/8aabb2ce-fb24-40a6-9e87-51d402e08895-kube-api-access-dt8l6\") pod \"8aabb2ce-fb24-40a6-9e87-51d402e08895\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.886384 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-util\") pod \"8aabb2ce-fb24-40a6-9e87-51d402e08895\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.886416 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-bundle\") pod \"8aabb2ce-fb24-40a6-9e87-51d402e08895\" (UID: \"8aabb2ce-fb24-40a6-9e87-51d402e08895\") " Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.887373 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-bundle" (OuterVolumeSpecName: "bundle") pod "8aabb2ce-fb24-40a6-9e87-51d402e08895" (UID: "8aabb2ce-fb24-40a6-9e87-51d402e08895"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.895510 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aabb2ce-fb24-40a6-9e87-51d402e08895-kube-api-access-dt8l6" (OuterVolumeSpecName: "kube-api-access-dt8l6") pod "8aabb2ce-fb24-40a6-9e87-51d402e08895" (UID: "8aabb2ce-fb24-40a6-9e87-51d402e08895"). InnerVolumeSpecName "kube-api-access-dt8l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.988243 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt8l6\" (UniqueName: \"kubernetes.io/projected/8aabb2ce-fb24-40a6-9e87-51d402e08895-kube-api-access-dt8l6\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:28 crc kubenswrapper[4937]: I0123 06:46:28.988484 4937 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.227756 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-util" (OuterVolumeSpecName: "util") pod "8aabb2ce-fb24-40a6-9e87-51d402e08895" (UID: "8aabb2ce-fb24-40a6-9e87-51d402e08895"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.290630 4937 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8aabb2ce-fb24-40a6-9e87-51d402e08895-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.473065 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.473065 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t" event={"ID":"8aabb2ce-fb24-40a6-9e87-51d402e08895","Type":"ContainerDied","Data":"9ece3389f49262edf1e3d62a0da6e445cba8bc689041d74adcf5f39198acb571"} Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.473363 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ece3389f49262edf1e3d62a0da6e445cba8bc689041d74adcf5f39198acb571" Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.477390 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerStarted","Data":"0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243"} Jan 23 06:46:29 crc kubenswrapper[4937]: I0123 06:46:29.503973 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cqtb7" podStartSLOduration=1.664397667 podStartE2EDuration="4.503948476s" podCreationTimestamp="2026-01-23 06:46:25 +0000 UTC" firstStartedPulling="2026-01-23 06:46:26.441472581 +0000 UTC m=+786.245239254" lastFinishedPulling="2026-01-23 06:46:29.28102341 +0000 UTC m=+789.084790063" observedRunningTime="2026-01-23 06:46:29.49958113 +0000 UTC m=+789.303347803" watchObservedRunningTime="2026-01-23 06:46:29.503948476 +0000 UTC m=+789.307715169" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.222743 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ddc6r"] Jan 23 06:46:32 crc kubenswrapper[4937]: E0123 06:46:32.223350 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" 
containerName="pull" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.223370 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerName="pull" Jan 23 06:46:32 crc kubenswrapper[4937]: E0123 06:46:32.223416 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerName="util" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.223429 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerName="util" Jan 23 06:46:32 crc kubenswrapper[4937]: E0123 06:46:32.223451 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerName="extract" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.223466 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerName="extract" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.223673 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aabb2ce-fb24-40a6-9e87-51d402e08895" containerName="extract" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.224300 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.226654 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.226845 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zgb7k" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.228045 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.238212 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ddc6r"] Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.329840 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5nk\" (UniqueName: \"kubernetes.io/projected/d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036-kube-api-access-6x5nk\") pod \"nmstate-operator-646758c888-ddc6r\" (UID: \"d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036\") " pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.430804 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5nk\" (UniqueName: \"kubernetes.io/projected/d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036-kube-api-access-6x5nk\") pod \"nmstate-operator-646758c888-ddc6r\" (UID: \"d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036\") " pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.460698 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5nk\" (UniqueName: \"kubernetes.io/projected/d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036-kube-api-access-6x5nk\") pod \"nmstate-operator-646758c888-ddc6r\" (UID: 
\"d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036\") " pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.540108 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" Jan 23 06:46:32 crc kubenswrapper[4937]: I0123 06:46:32.735372 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ddc6r"] Jan 23 06:46:32 crc kubenswrapper[4937]: W0123 06:46:32.743409 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4cc3ae1_2cf6_4ca0_a32b_ffb846bd2036.slice/crio-5c8d70180f83260d80e87bc21c48d5e87ebd015b609972427877ef54964fbebf WatchSource:0}: Error finding container 5c8d70180f83260d80e87bc21c48d5e87ebd015b609972427877ef54964fbebf: Status 404 returned error can't find the container with id 5c8d70180f83260d80e87bc21c48d5e87ebd015b609972427877ef54964fbebf Jan 23 06:46:33 crc kubenswrapper[4937]: I0123 06:46:33.523012 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" event={"ID":"d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036","Type":"ContainerStarted","Data":"5c8d70180f83260d80e87bc21c48d5e87ebd015b609972427877ef54964fbebf"} Jan 23 06:46:35 crc kubenswrapper[4937]: I0123 06:46:35.612217 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:35 crc kubenswrapper[4937]: I0123 06:46:35.612276 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:36 crc kubenswrapper[4937]: I0123 06:46:36.550137 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" 
event={"ID":"d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036","Type":"ContainerStarted","Data":"e5dfccda0dada538467cf4aaf931c8982b3986c4e1c6694db44002e75839ec5d"} Jan 23 06:46:36 crc kubenswrapper[4937]: I0123 06:46:36.581360 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-ddc6r" podStartSLOduration=1.267921242 podStartE2EDuration="4.581339558s" podCreationTimestamp="2026-01-23 06:46:32 +0000 UTC" firstStartedPulling="2026-01-23 06:46:32.749314504 +0000 UTC m=+792.553081167" lastFinishedPulling="2026-01-23 06:46:36.06273283 +0000 UTC m=+795.866499483" observedRunningTime="2026-01-23 06:46:36.578935774 +0000 UTC m=+796.382702457" watchObservedRunningTime="2026-01-23 06:46:36.581339558 +0000 UTC m=+796.385106221" Jan 23 06:46:36 crc kubenswrapper[4937]: I0123 06:46:36.658509 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cqtb7" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="registry-server" probeResult="failure" output=< Jan 23 06:46:36 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 06:46:36 crc kubenswrapper[4937]: > Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.665930 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-s7f9b"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.667029 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.688044 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-s9kqz" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.693054 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.693907 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.699076 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.714671 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tsh7l"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.716056 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.720635 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.724070 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.724122 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.757629 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-s7f9b"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.808947 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcg5f\" (UniqueName: \"kubernetes.io/projected/708f45a9-e78f-4bfd-8036-2ee32896def2-kube-api-access-fcg5f\") pod \"nmstate-metrics-54757c584b-s7f9b\" (UID: \"708f45a9-e78f-4bfd-8036-2ee32896def2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809001 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-dbus-socket\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " 
pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809052 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qmbp\" (UniqueName: \"kubernetes.io/projected/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-kube-api-access-4qmbp\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809100 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-ovs-socket\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809169 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4gp\" (UniqueName: \"kubernetes.io/projected/505e84ed-8479-4fee-b1f9-e660209e9f6a-kube-api-access-kx4gp\") pod \"nmstate-webhook-8474b5b9d8-c6htp\" (UID: \"505e84ed-8479-4fee-b1f9-e660209e9f6a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809246 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-nmstate-lock\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809310 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/505e84ed-8479-4fee-b1f9-e660209e9f6a-tls-key-pair\") pod 
\"nmstate-webhook-8474b5b9d8-c6htp\" (UID: \"505e84ed-8479-4fee-b1f9-e660209e9f6a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.809845 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.810475 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.813419 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.813507 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-584f2" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.813654 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.836097 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv"] Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910107 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3be187b8-4f0a-4298-bfd6-e03586c755ef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910164 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcg5f\" (UniqueName: \"kubernetes.io/projected/708f45a9-e78f-4bfd-8036-2ee32896def2-kube-api-access-fcg5f\") pod \"nmstate-metrics-54757c584b-s7f9b\" (UID: 
\"708f45a9-e78f-4bfd-8036-2ee32896def2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910185 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-dbus-socket\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910224 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qmbp\" (UniqueName: \"kubernetes.io/projected/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-kube-api-access-4qmbp\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910244 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-ovs-socket\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910264 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4gp\" (UniqueName: \"kubernetes.io/projected/505e84ed-8479-4fee-b1f9-e660209e9f6a-kube-api-access-kx4gp\") pod \"nmstate-webhook-8474b5b9d8-c6htp\" (UID: \"505e84ed-8479-4fee-b1f9-e660209e9f6a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910284 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3be187b8-4f0a-4298-bfd6-e03586c755ef-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: 
\"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910308 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-nmstate-lock\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910326 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/505e84ed-8479-4fee-b1f9-e660209e9f6a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c6htp\" (UID: \"505e84ed-8479-4fee-b1f9-e660209e9f6a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910353 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks2pl\" (UniqueName: \"kubernetes.io/projected/3be187b8-4f0a-4298-bfd6-e03586c755ef-kube-api-access-ks2pl\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910565 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-dbus-socket\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-ovs-socket\") pod \"nmstate-handler-tsh7l\" (UID: 
\"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.910665 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-nmstate-lock\") pod \"nmstate-handler-tsh7l\" (UID: \"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.915837 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/505e84ed-8479-4fee-b1f9-e660209e9f6a-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c6htp\" (UID: \"505e84ed-8479-4fee-b1f9-e660209e9f6a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.925878 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4gp\" (UniqueName: \"kubernetes.io/projected/505e84ed-8479-4fee-b1f9-e660209e9f6a-kube-api-access-kx4gp\") pod \"nmstate-webhook-8474b5b9d8-c6htp\" (UID: \"505e84ed-8479-4fee-b1f9-e660209e9f6a\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.930093 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcg5f\" (UniqueName: \"kubernetes.io/projected/708f45a9-e78f-4bfd-8036-2ee32896def2-kube-api-access-fcg5f\") pod \"nmstate-metrics-54757c584b-s7f9b\" (UID: \"708f45a9-e78f-4bfd-8036-2ee32896def2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.934130 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qmbp\" (UniqueName: \"kubernetes.io/projected/ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78-kube-api-access-4qmbp\") pod \"nmstate-handler-tsh7l\" (UID: 
\"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78\") " pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:37 crc kubenswrapper[4937]: I0123 06:46:37.990875 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.010913 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks2pl\" (UniqueName: \"kubernetes.io/projected/3be187b8-4f0a-4298-bfd6-e03586c755ef-kube-api-access-ks2pl\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.011076 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3be187b8-4f0a-4298-bfd6-e03586c755ef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.011171 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3be187b8-4f0a-4298-bfd6-e03586c755ef-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.011974 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3be187b8-4f0a-4298-bfd6-e03586c755ef-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: 
E0123 06:46:38.012335 4937 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 23 06:46:38 crc kubenswrapper[4937]: E0123 06:46:38.012430 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3be187b8-4f0a-4298-bfd6-e03586c755ef-plugin-serving-cert podName:3be187b8-4f0a-4298-bfd6-e03586c755ef nodeName:}" failed. No retries permitted until 2026-01-23 06:46:38.512417013 +0000 UTC m=+798.316183666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3be187b8-4f0a-4298-bfd6-e03586c755ef-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-j4rhv" (UID: "3be187b8-4f0a-4298-bfd6-e03586c755ef") : secret "plugin-serving-cert" not found Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.013372 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d69557dd4-2qwxv"] Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.014336 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.015098 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.039513 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.052036 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks2pl\" (UniqueName: \"kubernetes.io/projected/3be187b8-4f0a-4298-bfd6-e03586c755ef-kube-api-access-ks2pl\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.052902 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d69557dd4-2qwxv"] Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213350 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb15df78-4e9c-4de0-a718-b5a182cee105-console-oauth-config\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213548 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-oauth-serving-cert\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213574 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-trusted-ca-bundle\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213604 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-service-ca\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213659 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-console-config\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213686 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb15df78-4e9c-4de0-a718-b5a182cee105-console-serving-cert\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.213700 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfb6\" (UniqueName: \"kubernetes.io/projected/bb15df78-4e9c-4de0-a718-b5a182cee105-kube-api-access-mzfb6\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.243757 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-s7f9b"] Jan 23 06:46:38 crc kubenswrapper[4937]: W0123 06:46:38.250810 4937 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod708f45a9_e78f_4bfd_8036_2ee32896def2.slice/crio-a966ad616e44258c857b66ae6de51825069720befcf5205b1df282cc9ebf8026 WatchSource:0}: Error finding container a966ad616e44258c857b66ae6de51825069720befcf5205b1df282cc9ebf8026: Status 404 returned error can't find the container with id a966ad616e44258c857b66ae6de51825069720befcf5205b1df282cc9ebf8026 Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.314829 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb15df78-4e9c-4de0-a718-b5a182cee105-console-oauth-config\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.314914 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-oauth-serving-cert\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.314976 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-trusted-ca-bundle\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.315003 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-service-ca\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 
crc kubenswrapper[4937]: I0123 06:46:38.315070 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-console-config\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.315165 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb15df78-4e9c-4de0-a718-b5a182cee105-console-serving-cert\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.315188 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfb6\" (UniqueName: \"kubernetes.io/projected/bb15df78-4e9c-4de0-a718-b5a182cee105-kube-api-access-mzfb6\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.316157 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-trusted-ca-bundle\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.316214 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-service-ca\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.316320 
4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-oauth-serving-cert\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.317117 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb15df78-4e9c-4de0-a718-b5a182cee105-console-config\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.320173 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb15df78-4e9c-4de0-a718-b5a182cee105-console-oauth-config\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.321041 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb15df78-4e9c-4de0-a718-b5a182cee105-console-serving-cert\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.346249 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfb6\" (UniqueName: \"kubernetes.io/projected/bb15df78-4e9c-4de0-a718-b5a182cee105-kube-api-access-mzfb6\") pod \"console-7d69557dd4-2qwxv\" (UID: \"bb15df78-4e9c-4de0-a718-b5a182cee105\") " pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.385707 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.424344 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp"] Jan 23 06:46:38 crc kubenswrapper[4937]: W0123 06:46:38.433717 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505e84ed_8479_4fee_b1f9_e660209e9f6a.slice/crio-123821aca57d662098c9e12026431656f2701b0e4d5b0896502468d03de5e12c WatchSource:0}: Error finding container 123821aca57d662098c9e12026431656f2701b0e4d5b0896502468d03de5e12c: Status 404 returned error can't find the container with id 123821aca57d662098c9e12026431656f2701b0e4d5b0896502468d03de5e12c Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.517538 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3be187b8-4f0a-4298-bfd6-e03586c755ef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.525674 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3be187b8-4f0a-4298-bfd6-e03586c755ef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-j4rhv\" (UID: \"3be187b8-4f0a-4298-bfd6-e03586c755ef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.568368 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tsh7l" event={"ID":"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78","Type":"ContainerStarted","Data":"ce8470a221e51f5974133b5771ff4fc0f4df62f7a17a9fbc18736c098a486e12"} Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 
06:46:38.569749 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" event={"ID":"505e84ed-8479-4fee-b1f9-e660209e9f6a","Type":"ContainerStarted","Data":"123821aca57d662098c9e12026431656f2701b0e4d5b0896502468d03de5e12c"} Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.571575 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" event={"ID":"708f45a9-e78f-4bfd-8036-2ee32896def2","Type":"ContainerStarted","Data":"a966ad616e44258c857b66ae6de51825069720befcf5205b1df282cc9ebf8026"} Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.741781 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" Jan 23 06:46:38 crc kubenswrapper[4937]: I0123 06:46:38.847831 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d69557dd4-2qwxv"] Jan 23 06:46:39 crc kubenswrapper[4937]: I0123 06:46:39.019609 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv"] Jan 23 06:46:39 crc kubenswrapper[4937]: I0123 06:46:39.582651 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" event={"ID":"3be187b8-4f0a-4298-bfd6-e03586c755ef","Type":"ContainerStarted","Data":"f4668e0a5ab1bf7aa3f8ce7a181e23bf873f9c189b18c3504aff659cf990ed7e"} Jan 23 06:46:39 crc kubenswrapper[4937]: I0123 06:46:39.586021 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d69557dd4-2qwxv" event={"ID":"bb15df78-4e9c-4de0-a718-b5a182cee105","Type":"ContainerStarted","Data":"0108842b9dd2ce6f95c9cc1b4a7a8b71561b7f8b335cb6d5aab1de38cdd0e9d4"} Jan 23 06:46:40 crc kubenswrapper[4937]: I0123 06:46:40.596477 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d69557dd4-2qwxv" 
event={"ID":"bb15df78-4e9c-4de0-a718-b5a182cee105","Type":"ContainerStarted","Data":"01d7e6abc737cd78a3d628d2ba2c3e5916af0ed73d4c004acada1cccd39f5345"} Jan 23 06:46:41 crc kubenswrapper[4937]: I0123 06:46:41.622355 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d69557dd4-2qwxv" podStartSLOduration=4.622335142 podStartE2EDuration="4.622335142s" podCreationTimestamp="2026-01-23 06:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:46:41.618193262 +0000 UTC m=+801.421959925" watchObservedRunningTime="2026-01-23 06:46:41.622335142 +0000 UTC m=+801.426101805" Jan 23 06:46:42 crc kubenswrapper[4937]: I0123 06:46:42.610162 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" event={"ID":"505e84ed-8479-4fee-b1f9-e660209e9f6a","Type":"ContainerStarted","Data":"d9600b5f019b67df9aa957b6bd82d96aaeb3124e601263b1145b1dd5ecf9194c"} Jan 23 06:46:42 crc kubenswrapper[4937]: I0123 06:46:42.611152 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:46:42 crc kubenswrapper[4937]: I0123 06:46:42.617427 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" event={"ID":"708f45a9-e78f-4bfd-8036-2ee32896def2","Type":"ContainerStarted","Data":"6aa26ee2079e3d0a7be19209fa72ea8bff5ed1f2c9d472727b3df0739b841c8a"} Jan 23 06:46:43 crc kubenswrapper[4937]: I0123 06:46:43.626804 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" event={"ID":"3be187b8-4f0a-4298-bfd6-e03586c755ef","Type":"ContainerStarted","Data":"c9988524dae9f02f5783cd40516561f61e1c962065bdc2b599c6ba8ae4c98531"} Jan 23 06:46:43 crc kubenswrapper[4937]: I0123 06:46:43.630033 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tsh7l" event={"ID":"ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78","Type":"ContainerStarted","Data":"b947d5e2cca947753e73ffa7925421f19106c58e92e4886dab994d02242fa719"} Jan 23 06:46:43 crc kubenswrapper[4937]: I0123 06:46:43.630270 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:43 crc kubenswrapper[4937]: I0123 06:46:43.648971 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-j4rhv" podStartSLOduration=2.4043612469999998 podStartE2EDuration="6.6489475s" podCreationTimestamp="2026-01-23 06:46:37 +0000 UTC" firstStartedPulling="2026-01-23 06:46:39.043242587 +0000 UTC m=+798.847009240" lastFinishedPulling="2026-01-23 06:46:43.28782883 +0000 UTC m=+803.091595493" observedRunningTime="2026-01-23 06:46:43.647520862 +0000 UTC m=+803.451287505" watchObservedRunningTime="2026-01-23 06:46:43.6489475 +0000 UTC m=+803.452714183" Jan 23 06:46:43 crc kubenswrapper[4937]: I0123 06:46:43.650414 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" podStartSLOduration=2.711984526 podStartE2EDuration="6.650401419s" podCreationTimestamp="2026-01-23 06:46:37 +0000 UTC" firstStartedPulling="2026-01-23 06:46:38.437261717 +0000 UTC m=+798.241028370" lastFinishedPulling="2026-01-23 06:46:42.37567861 +0000 UTC m=+802.179445263" observedRunningTime="2026-01-23 06:46:42.627819583 +0000 UTC m=+802.431586236" watchObservedRunningTime="2026-01-23 06:46:43.650401419 +0000 UTC m=+803.454168112" Jan 23 06:46:43 crc kubenswrapper[4937]: I0123 06:46:43.671143 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tsh7l" podStartSLOduration=2.424811731 podStartE2EDuration="6.671110128s" podCreationTimestamp="2026-01-23 06:46:37 +0000 UTC" 
firstStartedPulling="2026-01-23 06:46:38.068650468 +0000 UTC m=+797.872417121" lastFinishedPulling="2026-01-23 06:46:42.314948865 +0000 UTC m=+802.118715518" observedRunningTime="2026-01-23 06:46:43.666500136 +0000 UTC m=+803.470266859" watchObservedRunningTime="2026-01-23 06:46:43.671110128 +0000 UTC m=+803.474876821" Jan 23 06:46:45 crc kubenswrapper[4937]: I0123 06:46:45.666018 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:45 crc kubenswrapper[4937]: I0123 06:46:45.719011 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:45 crc kubenswrapper[4937]: I0123 06:46:45.899305 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqtb7"] Jan 23 06:46:46 crc kubenswrapper[4937]: I0123 06:46:46.649338 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" event={"ID":"708f45a9-e78f-4bfd-8036-2ee32896def2","Type":"ContainerStarted","Data":"19e69e50b63d811d2b4804d1b0385898ca315cbff3fb1880ceab3abb7ca5cdbf"} Jan 23 06:46:46 crc kubenswrapper[4937]: I0123 06:46:46.673761 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-s7f9b" podStartSLOduration=2.079054469 podStartE2EDuration="9.673734463s" podCreationTimestamp="2026-01-23 06:46:37 +0000 UTC" firstStartedPulling="2026-01-23 06:46:38.252632949 +0000 UTC m=+798.056399602" lastFinishedPulling="2026-01-23 06:46:45.847312923 +0000 UTC m=+805.651079596" observedRunningTime="2026-01-23 06:46:46.673267771 +0000 UTC m=+806.477034464" watchObservedRunningTime="2026-01-23 06:46:46.673734463 +0000 UTC m=+806.477501156" Jan 23 06:46:47 crc kubenswrapper[4937]: I0123 06:46:47.660876 4937 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-cqtb7" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="registry-server" containerID="cri-o://0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243" gracePeriod=2 Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.079249 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tsh7l" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.085350 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.167893 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-utilities\") pod \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.168401 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-catalog-content\") pod \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.168469 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk2lf\" (UniqueName: \"kubernetes.io/projected/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-kube-api-access-dk2lf\") pod \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\" (UID: \"2e11507b-08c7-451e-b2b1-5b5c1d7715f5\") " Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.171464 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-utilities" (OuterVolumeSpecName: "utilities") pod "2e11507b-08c7-451e-b2b1-5b5c1d7715f5" (UID: 
"2e11507b-08c7-451e-b2b1-5b5c1d7715f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.183777 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-kube-api-access-dk2lf" (OuterVolumeSpecName: "kube-api-access-dk2lf") pod "2e11507b-08c7-451e-b2b1-5b5c1d7715f5" (UID: "2e11507b-08c7-451e-b2b1-5b5c1d7715f5"). InnerVolumeSpecName "kube-api-access-dk2lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.270242 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.270288 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk2lf\" (UniqueName: \"kubernetes.io/projected/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-kube-api-access-dk2lf\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.313213 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e11507b-08c7-451e-b2b1-5b5c1d7715f5" (UID: "2e11507b-08c7-451e-b2b1-5b5c1d7715f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.372366 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e11507b-08c7-451e-b2b1-5b5c1d7715f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.386822 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.386884 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.394461 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.671783 4937 generic.go:334] "Generic (PLEG): container finished" podID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerID="0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243" exitCode=0 Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.671857 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerDied","Data":"0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243"} Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.672121 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqtb7" event={"ID":"2e11507b-08c7-451e-b2b1-5b5c1d7715f5","Type":"ContainerDied","Data":"20871b83e06b4504a6e4e0907e24c307c263a5717d25cb0fb6593e9fd3ec0260"} Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.672164 4937 scope.go:117] "RemoveContainer" containerID="0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 
06:46:48.671889 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cqtb7" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.679923 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d69557dd4-2qwxv" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.714240 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqtb7"] Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.714979 4937 scope.go:117] "RemoveContainer" containerID="65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.723557 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqtb7"] Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.744354 4937 scope.go:117] "RemoveContainer" containerID="843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.776581 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6n269"] Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.793815 4937 scope.go:117] "RemoveContainer" containerID="0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243" Jan 23 06:46:48 crc kubenswrapper[4937]: E0123 06:46:48.794304 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243\": container with ID starting with 0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243 not found: ID does not exist" containerID="0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.794335 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243"} err="failed to get container status \"0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243\": rpc error: code = NotFound desc = could not find container \"0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243\": container with ID starting with 0019c8d32f11349b718c4187a2b6d702dd06603e782525b685063d6cee3cd243 not found: ID does not exist" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.794356 4937 scope.go:117] "RemoveContainer" containerID="65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595" Jan 23 06:46:48 crc kubenswrapper[4937]: E0123 06:46:48.794650 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595\": container with ID starting with 65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595 not found: ID does not exist" containerID="65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.794679 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595"} err="failed to get container status \"65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595\": rpc error: code = NotFound desc = could not find container \"65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595\": container with ID starting with 65a8b91f95bcb6d05b28b1cb3f94e110f06255d580b775f02fb378d51aa15595 not found: ID does not exist" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.794697 4937 scope.go:117] "RemoveContainer" containerID="843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775" Jan 23 06:46:48 crc kubenswrapper[4937]: E0123 06:46:48.794999 4937 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775\": container with ID starting with 843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775 not found: ID does not exist" containerID="843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775" Jan 23 06:46:48 crc kubenswrapper[4937]: I0123 06:46:48.795040 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775"} err="failed to get container status \"843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775\": rpc error: code = NotFound desc = could not find container \"843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775\": container with ID starting with 843d1b40f84b88e7863738d37549ef37bebe230cd62b02d374d0d84859dbd775 not found: ID does not exist" Jan 23 06:46:50 crc kubenswrapper[4937]: I0123 06:46:50.541338 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" path="/var/lib/kubelet/pods/2e11507b-08c7-451e-b2b1-5b5c1d7715f5/volumes" Jan 23 06:46:58 crc kubenswrapper[4937]: I0123 06:46:58.023469 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c6htp" Jan 23 06:47:07 crc kubenswrapper[4937]: I0123 06:47:07.724050 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:47:07 crc kubenswrapper[4937]: I0123 06:47:07.725028 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:47:13 crc kubenswrapper[4937]: I0123 06:47:13.825237 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6n269" podUID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" containerName="console" containerID="cri-o://057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722" gracePeriod=15 Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.200416 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6n269_a88c49d5-e615-4c41-972e-3a0ddcadfd53/console/0.log" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.200783 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249769 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-config\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249842 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-serving-cert\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249865 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-service-ca\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 
23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249890 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhqjd\" (UniqueName: \"kubernetes.io/projected/a88c49d5-e615-4c41-972e-3a0ddcadfd53-kube-api-access-zhqjd\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249914 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-oauth-config\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249944 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-trusted-ca-bundle\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.249991 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-oauth-serving-cert\") pod \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\" (UID: \"a88c49d5-e615-4c41-972e-3a0ddcadfd53\") " Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.250727 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-service-ca" (OuterVolumeSpecName: "service-ca") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.250767 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-config" (OuterVolumeSpecName: "console-config") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.251078 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.251567 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.255498 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.259109 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.260221 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88c49d5-e615-4c41-972e-3a0ddcadfd53-kube-api-access-zhqjd" (OuterVolumeSpecName: "kube-api-access-zhqjd") pod "a88c49d5-e615-4c41-972e-3a0ddcadfd53" (UID: "a88c49d5-e615-4c41-972e-3a0ddcadfd53"). InnerVolumeSpecName "kube-api-access-zhqjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351265 4937 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351302 4937 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351313 4937 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351326 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhqjd\" (UniqueName: 
\"kubernetes.io/projected/a88c49d5-e615-4c41-972e-3a0ddcadfd53-kube-api-access-zhqjd\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351341 4937 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a88c49d5-e615-4c41-972e-3a0ddcadfd53-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351352 4937 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.351361 4937 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a88c49d5-e615-4c41-972e-3a0ddcadfd53-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.867656 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6n269_a88c49d5-e615-4c41-972e-3a0ddcadfd53/console/0.log" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.867992 4937 generic.go:334] "Generic (PLEG): container finished" podID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" containerID="057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722" exitCode=2 Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.868024 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6n269" event={"ID":"a88c49d5-e615-4c41-972e-3a0ddcadfd53","Type":"ContainerDied","Data":"057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722"} Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.868063 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6n269" 
event={"ID":"a88c49d5-e615-4c41-972e-3a0ddcadfd53","Type":"ContainerDied","Data":"50783287e8f92574100d0401f3ac1731291064ca4bf5e0e399111ad8988a1065"} Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.868079 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6n269" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.868084 4937 scope.go:117] "RemoveContainer" containerID="057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.885775 4937 scope.go:117] "RemoveContainer" containerID="057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722" Jan 23 06:47:14 crc kubenswrapper[4937]: E0123 06:47:14.886381 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722\": container with ID starting with 057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722 not found: ID does not exist" containerID="057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.886433 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722"} err="failed to get container status \"057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722\": rpc error: code = NotFound desc = could not find container \"057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722\": container with ID starting with 057840d57903a8c6cc87b59fff4bc90592a2f6c83ba1a3dc0eb70ea6db5e4722 not found: ID does not exist" Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.889271 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6n269"] Jan 23 06:47:14 crc kubenswrapper[4937]: I0123 06:47:14.894851 4937 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6n269"] Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357012 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8"] Jan 23 06:47:15 crc kubenswrapper[4937]: E0123 06:47:15.357233 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" containerName="console" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357247 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" containerName="console" Jan 23 06:47:15 crc kubenswrapper[4937]: E0123 06:47:15.357263 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="extract-content" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357270 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="extract-content" Jan 23 06:47:15 crc kubenswrapper[4937]: E0123 06:47:15.357284 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="extract-utilities" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357291 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="extract-utilities" Jan 23 06:47:15 crc kubenswrapper[4937]: E0123 06:47:15.357306 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="registry-server" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357315 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="registry-server" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357440 4937 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" containerName="console" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.357451 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e11507b-08c7-451e-b2b1-5b5c1d7715f5" containerName="registry-server" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.358432 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.361509 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.368078 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8"] Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.464430 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.464622 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.464697 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5mznn\" (UniqueName: \"kubernetes.io/projected/d939310b-dd64-4e1d-9b89-715b39b414cb-kube-api-access-5mznn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.566753 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.566846 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mznn\" (UniqueName: \"kubernetes.io/projected/d939310b-dd64-4e1d-9b89-715b39b414cb-kube-api-access-5mznn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.567054 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.567359 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.567899 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.587001 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mznn\" (UniqueName: \"kubernetes.io/projected/d939310b-dd64-4e1d-9b89-715b39b414cb-kube-api-access-5mznn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.673799 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:15 crc kubenswrapper[4937]: I0123 06:47:15.895639 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8"] Jan 23 06:47:16 crc kubenswrapper[4937]: I0123 06:47:16.535364 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88c49d5-e615-4c41-972e-3a0ddcadfd53" path="/var/lib/kubelet/pods/a88c49d5-e615-4c41-972e-3a0ddcadfd53/volumes" Jan 23 06:47:16 crc kubenswrapper[4937]: I0123 06:47:16.886542 4937 generic.go:334] "Generic (PLEG): container finished" podID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerID="37ee6bd55fdbf275ec15082d7521dd3b23579dc8de0cf2cc71a9b710c22f075d" exitCode=0 Jan 23 06:47:16 crc kubenswrapper[4937]: I0123 06:47:16.886741 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" event={"ID":"d939310b-dd64-4e1d-9b89-715b39b414cb","Type":"ContainerDied","Data":"37ee6bd55fdbf275ec15082d7521dd3b23579dc8de0cf2cc71a9b710c22f075d"} Jan 23 06:47:16 crc kubenswrapper[4937]: I0123 06:47:16.886836 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" event={"ID":"d939310b-dd64-4e1d-9b89-715b39b414cb","Type":"ContainerStarted","Data":"06975967ab87abf7cd3777f2250ae6325adbc0d8b8e94037b02be8ea8ccd1c54"} Jan 23 06:47:19 crc kubenswrapper[4937]: I0123 06:47:19.911518 4937 generic.go:334] "Generic (PLEG): container finished" podID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerID="bcf0210c8f8928517648222a48f47b7b57529a40e2b26f1e3ed8bed54d0f091e" exitCode=0 Jan 23 06:47:19 crc kubenswrapper[4937]: I0123 06:47:19.911571 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" event={"ID":"d939310b-dd64-4e1d-9b89-715b39b414cb","Type":"ContainerDied","Data":"bcf0210c8f8928517648222a48f47b7b57529a40e2b26f1e3ed8bed54d0f091e"} Jan 23 06:47:20 crc kubenswrapper[4937]: I0123 06:47:20.920066 4937 generic.go:334] "Generic (PLEG): container finished" podID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerID="da820cf1dbf5a3a4d284d6628960dfaab0b79a8e122d2dc7ae1c91230236e313" exitCode=0 Jan 23 06:47:20 crc kubenswrapper[4937]: I0123 06:47:20.920181 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" event={"ID":"d939310b-dd64-4e1d-9b89-715b39b414cb","Type":"ContainerDied","Data":"da820cf1dbf5a3a4d284d6628960dfaab0b79a8e122d2dc7ae1c91230236e313"} Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.246720 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.276777 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mznn\" (UniqueName: \"kubernetes.io/projected/d939310b-dd64-4e1d-9b89-715b39b414cb-kube-api-access-5mznn\") pod \"d939310b-dd64-4e1d-9b89-715b39b414cb\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.276904 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-util\") pod \"d939310b-dd64-4e1d-9b89-715b39b414cb\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.276976 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-bundle\") pod \"d939310b-dd64-4e1d-9b89-715b39b414cb\" (UID: \"d939310b-dd64-4e1d-9b89-715b39b414cb\") " Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.277928 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-bundle" (OuterVolumeSpecName: "bundle") pod "d939310b-dd64-4e1d-9b89-715b39b414cb" (UID: "d939310b-dd64-4e1d-9b89-715b39b414cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.307438 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d939310b-dd64-4e1d-9b89-715b39b414cb-kube-api-access-5mznn" (OuterVolumeSpecName: "kube-api-access-5mznn") pod "d939310b-dd64-4e1d-9b89-715b39b414cb" (UID: "d939310b-dd64-4e1d-9b89-715b39b414cb"). InnerVolumeSpecName "kube-api-access-5mznn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.338885 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-util" (OuterVolumeSpecName: "util") pod "d939310b-dd64-4e1d-9b89-715b39b414cb" (UID: "d939310b-dd64-4e1d-9b89-715b39b414cb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.378543 4937 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.378730 4937 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d939310b-dd64-4e1d-9b89-715b39b414cb-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.378753 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mznn\" (UniqueName: \"kubernetes.io/projected/d939310b-dd64-4e1d-9b89-715b39b414cb-kube-api-access-5mznn\") on node \"crc\" DevicePath \"\"" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.940894 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" event={"ID":"d939310b-dd64-4e1d-9b89-715b39b414cb","Type":"ContainerDied","Data":"06975967ab87abf7cd3777f2250ae6325adbc0d8b8e94037b02be8ea8ccd1c54"} Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.940968 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06975967ab87abf7cd3777f2250ae6325adbc0d8b8e94037b02be8ea8ccd1c54" Jan 23 06:47:22 crc kubenswrapper[4937]: I0123 06:47:22.940990 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.569822 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s"] Jan 23 06:47:33 crc kubenswrapper[4937]: E0123 06:47:33.570421 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="extract" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.570434 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="extract" Jan 23 06:47:33 crc kubenswrapper[4937]: E0123 06:47:33.570445 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="pull" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.570450 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="pull" Jan 23 06:47:33 crc kubenswrapper[4937]: E0123 06:47:33.570464 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="util" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.570470 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="util" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.570555 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d939310b-dd64-4e1d-9b89-715b39b414cb" containerName="extract" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.570931 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.572482 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.572552 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.573076 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9vm7n" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.573129 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.573476 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.585939 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s"] Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.742126 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/681d1d3c-b77f-4662-b1c9-6958e568becb-apiservice-cert\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.742197 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/681d1d3c-b77f-4662-b1c9-6958e568becb-webhook-cert\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: 
\"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.742338 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj472\" (UniqueName: \"kubernetes.io/projected/681d1d3c-b77f-4662-b1c9-6958e568becb-kube-api-access-dj472\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.843945 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/681d1d3c-b77f-4662-b1c9-6958e568becb-apiservice-cert\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.844003 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/681d1d3c-b77f-4662-b1c9-6958e568becb-webhook-cert\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.844037 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj472\" (UniqueName: \"kubernetes.io/projected/681d1d3c-b77f-4662-b1c9-6958e568becb-kube-api-access-dj472\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.849278 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/681d1d3c-b77f-4662-b1c9-6958e568becb-apiservice-cert\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.854626 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/681d1d3c-b77f-4662-b1c9-6958e568becb-webhook-cert\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.871868 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj472\" (UniqueName: \"kubernetes.io/projected/681d1d3c-b77f-4662-b1c9-6958e568becb-kube-api-access-dj472\") pod \"metallb-operator-controller-manager-6879d9f67d-9c95s\" (UID: \"681d1d3c-b77f-4662-b1c9-6958e568becb\") " pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.932941 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.954831 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8"] Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.955845 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.957928 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.958060 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-cdjzv" Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.958617 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8"] Jan 23 06:47:33 crc kubenswrapper[4937]: I0123 06:47:33.959015 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.150535 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7712630-dcf6-4b65-b525-bb63a735a0aa-apiservice-cert\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.150887 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7712630-dcf6-4b65-b525-bb63a735a0aa-webhook-cert\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.150963 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtvz\" (UniqueName: 
\"kubernetes.io/projected/e7712630-dcf6-4b65-b525-bb63a735a0aa-kube-api-access-vrtvz\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.193529 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s"] Jan 23 06:47:34 crc kubenswrapper[4937]: W0123 06:47:34.207325 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681d1d3c_b77f_4662_b1c9_6958e568becb.slice/crio-cec57f24a0c81f79e2719b69e1573faa8b7e7f9c8404b82f400fc8d3a2303e5b WatchSource:0}: Error finding container cec57f24a0c81f79e2719b69e1573faa8b7e7f9c8404b82f400fc8d3a2303e5b: Status 404 returned error can't find the container with id cec57f24a0c81f79e2719b69e1573faa8b7e7f9c8404b82f400fc8d3a2303e5b Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.252281 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7712630-dcf6-4b65-b525-bb63a735a0aa-webhook-cert\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.252332 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtvz\" (UniqueName: \"kubernetes.io/projected/e7712630-dcf6-4b65-b525-bb63a735a0aa-kube-api-access-vrtvz\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.252407 4937 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7712630-dcf6-4b65-b525-bb63a735a0aa-apiservice-cert\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.258546 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7712630-dcf6-4b65-b525-bb63a735a0aa-apiservice-cert\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.258916 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7712630-dcf6-4b65-b525-bb63a735a0aa-webhook-cert\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.268937 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtvz\" (UniqueName: \"kubernetes.io/projected/e7712630-dcf6-4b65-b525-bb63a735a0aa-kube-api-access-vrtvz\") pod \"metallb-operator-webhook-server-85d9998d95-rnvn8\" (UID: \"e7712630-dcf6-4b65-b525-bb63a735a0aa\") " pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.305501 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:34 crc kubenswrapper[4937]: I0123 06:47:34.556560 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8"] Jan 23 06:47:35 crc kubenswrapper[4937]: I0123 06:47:35.016465 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" event={"ID":"e7712630-dcf6-4b65-b525-bb63a735a0aa","Type":"ContainerStarted","Data":"eda0324a8d447bf56f8e498b45d6efb8150db8c73b119bde0b5f520ce56fb414"} Jan 23 06:47:35 crc kubenswrapper[4937]: I0123 06:47:35.017762 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" event={"ID":"681d1d3c-b77f-4662-b1c9-6958e568becb","Type":"ContainerStarted","Data":"cec57f24a0c81f79e2719b69e1573faa8b7e7f9c8404b82f400fc8d3a2303e5b"} Jan 23 06:47:37 crc kubenswrapper[4937]: I0123 06:47:37.724388 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:47:37 crc kubenswrapper[4937]: I0123 06:47:37.724811 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:47:37 crc kubenswrapper[4937]: I0123 06:47:37.724854 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:47:37 crc kubenswrapper[4937]: I0123 06:47:37.726386 4937 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"998b72db21c32d780c7acaf4315ccfdf0bdabc15b6ed9c145723fd20240726ab"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:47:37 crc kubenswrapper[4937]: I0123 06:47:37.726445 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://998b72db21c32d780c7acaf4315ccfdf0bdabc15b6ed9c145723fd20240726ab" gracePeriod=600 Jan 23 06:47:38 crc kubenswrapper[4937]: I0123 06:47:38.037026 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" event={"ID":"681d1d3c-b77f-4662-b1c9-6958e568becb","Type":"ContainerStarted","Data":"dc66119546d49e08b7ee9a0f8d43028427243bb0c2bf031fe1645fb4d6ce9ff1"} Jan 23 06:47:38 crc kubenswrapper[4937]: I0123 06:47:38.037210 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:47:38 crc kubenswrapper[4937]: I0123 06:47:38.039668 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="998b72db21c32d780c7acaf4315ccfdf0bdabc15b6ed9c145723fd20240726ab" exitCode=0 Jan 23 06:47:38 crc kubenswrapper[4937]: I0123 06:47:38.039704 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"998b72db21c32d780c7acaf4315ccfdf0bdabc15b6ed9c145723fd20240726ab"} Jan 23 06:47:38 crc kubenswrapper[4937]: I0123 06:47:38.039726 4937 scope.go:117] "RemoveContainer" 
containerID="7d3631879897790e7de1452e56edfc15429f364a23b8cbc48fd77e98ca63206e" Jan 23 06:47:40 crc kubenswrapper[4937]: I0123 06:47:40.059954 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"43224d24fa6475b6c1d236391f3764ff43f9af8ed79931ba5cc6fab271c5f9c7"} Jan 23 06:47:40 crc kubenswrapper[4937]: I0123 06:47:40.061584 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" event={"ID":"e7712630-dcf6-4b65-b525-bb63a735a0aa","Type":"ContainerStarted","Data":"28e40e7bd86602175d1d77705aa358952e58566ceb5eb45734b4d896f89be176"} Jan 23 06:47:40 crc kubenswrapper[4937]: I0123 06:47:40.061778 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:47:40 crc kubenswrapper[4937]: I0123 06:47:40.088238 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" podStartSLOduration=3.700676677 podStartE2EDuration="7.088216389s" podCreationTimestamp="2026-01-23 06:47:33 +0000 UTC" firstStartedPulling="2026-01-23 06:47:34.210254145 +0000 UTC m=+854.014020808" lastFinishedPulling="2026-01-23 06:47:37.597793877 +0000 UTC m=+857.401560520" observedRunningTime="2026-01-23 06:47:38.066049028 +0000 UTC m=+857.869815701" watchObservedRunningTime="2026-01-23 06:47:40.088216389 +0000 UTC m=+859.891983042" Jan 23 06:47:40 crc kubenswrapper[4937]: I0123 06:47:40.107769 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" podStartSLOduration=2.314632707 podStartE2EDuration="7.107751185s" podCreationTimestamp="2026-01-23 06:47:33 +0000 UTC" firstStartedPulling="2026-01-23 06:47:34.541648737 +0000 UTC 
m=+854.345415390" lastFinishedPulling="2026-01-23 06:47:39.334767215 +0000 UTC m=+859.138533868" observedRunningTime="2026-01-23 06:47:40.102504861 +0000 UTC m=+859.906271514" watchObservedRunningTime="2026-01-23 06:47:40.107751185 +0000 UTC m=+859.911517838" Jan 23 06:47:54 crc kubenswrapper[4937]: I0123 06:47:54.309652 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-85d9998d95-rnvn8" Jan 23 06:48:13 crc kubenswrapper[4937]: I0123 06:48:13.935252 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6879d9f67d-9c95s" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.887727 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s"] Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.889341 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.891562 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-t4str" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.893100 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cpm64"] Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.893643 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.899886 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.902219 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.902653 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.930874 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s"] Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.974463 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gs9fg"] Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.979082 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gs9fg" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.981122 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.981122 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.981307 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.982978 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-75z9x" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.989929 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-stcfs"] Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.991580 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:14 crc kubenswrapper[4937]: I0123 06:48:14.995560 4937 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.024247 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-stcfs"] Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079217 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-metrics\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079256 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-sockets\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079292 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsztk\" (UniqueName: \"kubernetes.io/projected/b19c38c9-ee52-4326-8101-2148ef37acfc-kube-api-access-lsztk\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079315 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b19c38c9-ee52-4326-8101-2148ef37acfc-metrics-certs\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079330 
4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-startup\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079349 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krx25\" (UniqueName: \"kubernetes.io/projected/07fe67ed-5566-4458-9379-9440d8085315-kube-api-access-krx25\") pod \"frr-k8s-webhook-server-7df86c4f6c-p8z6s\" (UID: \"07fe67ed-5566-4458-9379-9440d8085315\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079366 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07fe67ed-5566-4458-9379-9440d8085315-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p8z6s\" (UID: \"07fe67ed-5566-4458-9379-9440d8085315\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079382 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-conf\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.079397 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-reloader\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180394 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krx25\" (UniqueName: \"kubernetes.io/projected/07fe67ed-5566-4458-9379-9440d8085315-kube-api-access-krx25\") pod \"frr-k8s-webhook-server-7df86c4f6c-p8z6s\" (UID: \"07fe67ed-5566-4458-9379-9440d8085315\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180446 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsswz\" (UniqueName: \"kubernetes.io/projected/f1efc753-824c-42de-9e52-198864fee8e6-kube-api-access-bsswz\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180473 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07fe67ed-5566-4458-9379-9440d8085315-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p8z6s\" (UID: \"07fe67ed-5566-4458-9379-9440d8085315\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180498 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-conf\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-reloader\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180546 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1efc753-824c-42de-9e52-198864fee8e6-metrics-certs\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180582 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-metrics-certs\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180640 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2r2\" (UniqueName: \"kubernetes.io/projected/6bfdb6d3-4624-4365-8566-c81e229271da-kube-api-access-wc2r2\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180667 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1efc753-824c-42de-9e52-198864fee8e6-cert\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180696 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-metrics\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180718 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-sockets\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180750 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180789 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsztk\" (UniqueName: \"kubernetes.io/projected/b19c38c9-ee52-4326-8101-2148ef37acfc-kube-api-access-lsztk\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180817 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6bfdb6d3-4624-4365-8566-c81e229271da-metallb-excludel2\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180840 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b19c38c9-ee52-4326-8101-2148ef37acfc-metrics-certs\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.180861 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-startup\") pod \"frr-k8s-cpm64\" (UID: 
\"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.182117 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-startup\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.182733 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-metrics\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.182906 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-conf\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.183108 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-frr-sockets\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.183406 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b19c38c9-ee52-4326-8101-2148ef37acfc-reloader\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.188463 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b19c38c9-ee52-4326-8101-2148ef37acfc-metrics-certs\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.189088 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/07fe67ed-5566-4458-9379-9440d8085315-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p8z6s\" (UID: \"07fe67ed-5566-4458-9379-9440d8085315\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.207863 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsztk\" (UniqueName: \"kubernetes.io/projected/b19c38c9-ee52-4326-8101-2148ef37acfc-kube-api-access-lsztk\") pod \"frr-k8s-cpm64\" (UID: \"b19c38c9-ee52-4326-8101-2148ef37acfc\") " pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.210539 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krx25\" (UniqueName: \"kubernetes.io/projected/07fe67ed-5566-4458-9379-9440d8085315-kube-api-access-krx25\") pod \"frr-k8s-webhook-server-7df86c4f6c-p8z6s\" (UID: \"07fe67ed-5566-4458-9379-9440d8085315\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.270917 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.281909 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6bfdb6d3-4624-4365-8566-c81e229271da-metallb-excludel2\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.281967 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsswz\" (UniqueName: \"kubernetes.io/projected/f1efc753-824c-42de-9e52-198864fee8e6-kube-api-access-bsswz\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.281998 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1efc753-824c-42de-9e52-198864fee8e6-metrics-certs\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.282029 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-metrics-certs\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.282068 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc2r2\" (UniqueName: \"kubernetes.io/projected/6bfdb6d3-4624-4365-8566-c81e229271da-kube-api-access-wc2r2\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 
06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.282097 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1efc753-824c-42de-9e52-198864fee8e6-cert\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.282137 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: E0123 06:48:15.282260 4937 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:48:15 crc kubenswrapper[4937]: E0123 06:48:15.282317 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist podName:6bfdb6d3-4624-4365-8566-c81e229271da nodeName:}" failed. No retries permitted until 2026-01-23 06:48:15.782296811 +0000 UTC m=+895.586063464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist") pod "speaker-gs9fg" (UID: "6bfdb6d3-4624-4365-8566-c81e229271da") : secret "metallb-memberlist" not found Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.283223 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6bfdb6d3-4624-4365-8566-c81e229271da-metallb-excludel2\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: E0123 06:48:15.284006 4937 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 23 06:48:15 crc kubenswrapper[4937]: E0123 06:48:15.284058 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-metrics-certs podName:6bfdb6d3-4624-4365-8566-c81e229271da nodeName:}" failed. No retries permitted until 2026-01-23 06:48:15.784043729 +0000 UTC m=+895.587810382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-metrics-certs") pod "speaker-gs9fg" (UID: "6bfdb6d3-4624-4365-8566-c81e229271da") : secret "speaker-certs-secret" not found Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.285816 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.290138 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1efc753-824c-42de-9e52-198864fee8e6-cert\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.290606 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1efc753-824c-42de-9e52-198864fee8e6-metrics-certs\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.300190 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc2r2\" (UniqueName: \"kubernetes.io/projected/6bfdb6d3-4624-4365-8566-c81e229271da-kube-api-access-wc2r2\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.310672 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsswz\" (UniqueName: \"kubernetes.io/projected/f1efc753-824c-42de-9e52-198864fee8e6-kube-api-access-bsswz\") pod \"controller-6968d8fdc4-stcfs\" (UID: \"f1efc753-824c-42de-9e52-198864fee8e6\") " pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.315045 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.705063 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s"] Jan 23 06:48:15 crc kubenswrapper[4937]: W0123 06:48:15.712886 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07fe67ed_5566_4458_9379_9440d8085315.slice/crio-7e326714fefaa1b4c5b54746ddd3f6319efc5de054ba7b4945c15b899a0c43c2 WatchSource:0}: Error finding container 7e326714fefaa1b4c5b54746ddd3f6319efc5de054ba7b4945c15b899a0c43c2: Status 404 returned error can't find the container with id 7e326714fefaa1b4c5b54746ddd3f6319efc5de054ba7b4945c15b899a0c43c2 Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.754944 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-stcfs"] Jan 23 06:48:15 crc kubenswrapper[4937]: W0123 06:48:15.762961 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1efc753_824c_42de_9e52_198864fee8e6.slice/crio-891125d578f5715ddbde818a571ace4830c9839a908e013114e233e9af1e2fb2 WatchSource:0}: Error finding container 891125d578f5715ddbde818a571ace4830c9839a908e013114e233e9af1e2fb2: Status 404 returned error can't find the container with id 891125d578f5715ddbde818a571ace4830c9839a908e013114e233e9af1e2fb2 Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.788569 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.788721 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-metrics-certs\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:15 crc kubenswrapper[4937]: E0123 06:48:15.789926 4937 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 06:48:15 crc kubenswrapper[4937]: E0123 06:48:15.790007 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist podName:6bfdb6d3-4624-4365-8566-c81e229271da nodeName:}" failed. No retries permitted until 2026-01-23 06:48:16.789988306 +0000 UTC m=+896.593754959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist") pod "speaker-gs9fg" (UID: "6bfdb6d3-4624-4365-8566-c81e229271da") : secret "metallb-memberlist" not found Jan 23 06:48:15 crc kubenswrapper[4937]: I0123 06:48:15.794880 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-metrics-certs\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.320702 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"61606db2b1d7cb662fcdef1fbc98818c2167bf17be7ff53209de86c4d7b57a6b"} Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.321982 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" event={"ID":"07fe67ed-5566-4458-9379-9440d8085315","Type":"ContainerStarted","Data":"7e326714fefaa1b4c5b54746ddd3f6319efc5de054ba7b4945c15b899a0c43c2"} Jan 
23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.324159 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-stcfs" event={"ID":"f1efc753-824c-42de-9e52-198864fee8e6","Type":"ContainerStarted","Data":"ff0e8fbe5da36991c933290643c631f3ed2fa62702ab877fd30b9f1013d2bd14"} Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.324198 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-stcfs" event={"ID":"f1efc753-824c-42de-9e52-198864fee8e6","Type":"ContainerStarted","Data":"c4424aa6dd1ec8cb635590aea64250bc9e8c955bd7ee81d4a887e5e84c040a64"} Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.324217 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-stcfs" event={"ID":"f1efc753-824c-42de-9e52-198864fee8e6","Type":"ContainerStarted","Data":"891125d578f5715ddbde818a571ace4830c9839a908e013114e233e9af1e2fb2"} Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.324338 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.345030 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-stcfs" podStartSLOduration=2.344998279 podStartE2EDuration="2.344998279s" podCreationTimestamp="2026-01-23 06:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:48:16.338160011 +0000 UTC m=+896.141926664" watchObservedRunningTime="2026-01-23 06:48:16.344998279 +0000 UTC m=+896.148764982" Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.801028 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist\") pod \"speaker-gs9fg\" (UID: 
\"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:16 crc kubenswrapper[4937]: I0123 06:48:16.807479 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6bfdb6d3-4624-4365-8566-c81e229271da-memberlist\") pod \"speaker-gs9fg\" (UID: \"6bfdb6d3-4624-4365-8566-c81e229271da\") " pod="metallb-system/speaker-gs9fg" Jan 23 06:48:17 crc kubenswrapper[4937]: I0123 06:48:17.100137 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-gs9fg" Jan 23 06:48:17 crc kubenswrapper[4937]: I0123 06:48:17.354275 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gs9fg" event={"ID":"6bfdb6d3-4624-4365-8566-c81e229271da","Type":"ContainerStarted","Data":"e34fa75e6d2075e5d011766eb5cdb70712f4357de4a92162ab6d6e23ce712fac"} Jan 23 06:48:18 crc kubenswrapper[4937]: I0123 06:48:18.359694 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gs9fg" event={"ID":"6bfdb6d3-4624-4365-8566-c81e229271da","Type":"ContainerStarted","Data":"ca9281bd93fd4c146a7792803e9ad751b3cdeee92c7edaed3e1b67051ae00637"} Jan 23 06:48:18 crc kubenswrapper[4937]: I0123 06:48:18.359971 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gs9fg" Jan 23 06:48:18 crc kubenswrapper[4937]: I0123 06:48:18.359980 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gs9fg" event={"ID":"6bfdb6d3-4624-4365-8566-c81e229271da","Type":"ContainerStarted","Data":"347242bd35f12911d0d6126f6fdd98fe19c174d0a85757d80570bb3e2d0fe204"} Jan 23 06:48:18 crc kubenswrapper[4937]: I0123 06:48:18.375990 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gs9fg" podStartSLOduration=4.375972322 podStartE2EDuration="4.375972322s" podCreationTimestamp="2026-01-23 06:48:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:48:18.373984527 +0000 UTC m=+898.177751180" watchObservedRunningTime="2026-01-23 06:48:18.375972322 +0000 UTC m=+898.179738975" Jan 23 06:48:24 crc kubenswrapper[4937]: I0123 06:48:24.409705 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" event={"ID":"07fe67ed-5566-4458-9379-9440d8085315","Type":"ContainerStarted","Data":"8dde516c1d56fe9553c24e6bea26bf16e84169ab9854b72cebe6f16dd88c9bbb"} Jan 23 06:48:24 crc kubenswrapper[4937]: I0123 06:48:24.410244 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:24 crc kubenswrapper[4937]: I0123 06:48:24.413528 4937 generic.go:334] "Generic (PLEG): container finished" podID="b19c38c9-ee52-4326-8101-2148ef37acfc" containerID="14fce8e7b80ef67576795d1d9509b083c415253a75d5ad9ad709f955715ac8d6" exitCode=0 Jan 23 06:48:24 crc kubenswrapper[4937]: I0123 06:48:24.413639 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerDied","Data":"14fce8e7b80ef67576795d1d9509b083c415253a75d5ad9ad709f955715ac8d6"} Jan 23 06:48:24 crc kubenswrapper[4937]: I0123 06:48:24.438050 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" podStartSLOduration=2.538139474 podStartE2EDuration="10.438026262s" podCreationTimestamp="2026-01-23 06:48:14 +0000 UTC" firstStartedPulling="2026-01-23 06:48:15.714869602 +0000 UTC m=+895.518636255" lastFinishedPulling="2026-01-23 06:48:23.61475638 +0000 UTC m=+903.418523043" observedRunningTime="2026-01-23 06:48:24.434264029 +0000 UTC m=+904.238030712" watchObservedRunningTime="2026-01-23 06:48:24.438026262 +0000 UTC m=+904.241792945" Jan 23 
06:48:25 crc kubenswrapper[4937]: I0123 06:48:25.318485 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-stcfs" Jan 23 06:48:25 crc kubenswrapper[4937]: I0123 06:48:25.421561 4937 generic.go:334] "Generic (PLEG): container finished" podID="b19c38c9-ee52-4326-8101-2148ef37acfc" containerID="2c0cface632381a441f5dd755deb663e18eb154535f780cb70407d020e58851d" exitCode=0 Jan 23 06:48:25 crc kubenswrapper[4937]: I0123 06:48:25.421633 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerDied","Data":"2c0cface632381a441f5dd755deb663e18eb154535f780cb70407d020e58851d"} Jan 23 06:48:26 crc kubenswrapper[4937]: I0123 06:48:26.436884 4937 generic.go:334] "Generic (PLEG): container finished" podID="b19c38c9-ee52-4326-8101-2148ef37acfc" containerID="4cee69d2b047251db22d6a09dd84772ac03a30209ba1fc36d1798e6375fc8aa7" exitCode=0 Jan 23 06:48:26 crc kubenswrapper[4937]: I0123 06:48:26.436947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerDied","Data":"4cee69d2b047251db22d6a09dd84772ac03a30209ba1fc36d1798e6375fc8aa7"} Jan 23 06:48:27 crc kubenswrapper[4937]: I0123 06:48:27.105126 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gs9fg" Jan 23 06:48:27 crc kubenswrapper[4937]: I0123 06:48:27.448361 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"b8497f670be4c50abadbac6252787accaf14cb18ed5d6f381cdca2e004a01807"} Jan 23 06:48:27 crc kubenswrapper[4937]: I0123 06:48:27.448436 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" 
event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"25302354e9e9b40236c4e687e75c1ba0bb26f03161dd7addce82adbd6f7f9d6d"} Jan 23 06:48:27 crc kubenswrapper[4937]: I0123 06:48:27.448453 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"e0ed958878d8953aeea674a29851947e4c4b0f6ac3c001a4a830fa6e7730a4ca"} Jan 23 06:48:27 crc kubenswrapper[4937]: I0123 06:48:27.448464 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"474ef7dc8aab177c61227551827c3c3a54a545b59915a7f7d0d9b1aa99ea6735"} Jan 23 06:48:27 crc kubenswrapper[4937]: I0123 06:48:27.448477 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"685beb91c2e68773e762aef5ed72417ed70eaf022cc368b5f763572b7d88f367"} Jan 23 06:48:28 crc kubenswrapper[4937]: I0123 06:48:28.462281 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cpm64" event={"ID":"b19c38c9-ee52-4326-8101-2148ef37acfc","Type":"ContainerStarted","Data":"1099026f437d780820a4bd92c91526a86d58b42cf22fbfdbd53c5862b9144b89"} Jan 23 06:48:28 crc kubenswrapper[4937]: I0123 06:48:28.462554 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:28 crc kubenswrapper[4937]: I0123 06:48:28.505506 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cpm64" podStartSLOduration=6.287061457 podStartE2EDuration="14.505472644s" podCreationTimestamp="2026-01-23 06:48:14 +0000 UTC" firstStartedPulling="2026-01-23 06:48:15.440760143 +0000 UTC m=+895.244526796" lastFinishedPulling="2026-01-23 06:48:23.65917132 +0000 UTC m=+903.462937983" 
observedRunningTime="2026-01-23 06:48:28.497843958 +0000 UTC m=+908.301610641" watchObservedRunningTime="2026-01-23 06:48:28.505472644 +0000 UTC m=+908.309239337" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.187676 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bfxp8"] Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.189268 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.196005 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.196240 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.196440 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-shldz" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.201482 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bfxp8"] Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.286917 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.293764 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rwm\" (UniqueName: \"kubernetes.io/projected/0612473c-71cb-4d94-8449-fb4c67766ea5-kube-api-access-j6rwm\") pod \"openstack-operator-index-bfxp8\" (UID: \"0612473c-71cb-4d94-8449-fb4c67766ea5\") " pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.322877 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.394616 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6rwm\" (UniqueName: \"kubernetes.io/projected/0612473c-71cb-4d94-8449-fb4c67766ea5-kube-api-access-j6rwm\") pod \"openstack-operator-index-bfxp8\" (UID: \"0612473c-71cb-4d94-8449-fb4c67766ea5\") " pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.416397 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6rwm\" (UniqueName: \"kubernetes.io/projected/0612473c-71cb-4d94-8449-fb4c67766ea5-kube-api-access-j6rwm\") pod \"openstack-operator-index-bfxp8\" (UID: \"0612473c-71cb-4d94-8449-fb4c67766ea5\") " pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.520674 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:30 crc kubenswrapper[4937]: I0123 06:48:30.828758 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bfxp8"] Jan 23 06:48:31 crc kubenswrapper[4937]: I0123 06:48:31.484481 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfxp8" event={"ID":"0612473c-71cb-4d94-8449-fb4c67766ea5","Type":"ContainerStarted","Data":"6bd1650e43888663c88e9c3f6e53048c38ff278d69b4072d8b2da205b332defc"} Jan 23 06:48:33 crc kubenswrapper[4937]: I0123 06:48:33.499656 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfxp8" event={"ID":"0612473c-71cb-4d94-8449-fb4c67766ea5","Type":"ContainerStarted","Data":"099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3"} Jan 23 06:48:33 crc kubenswrapper[4937]: I0123 06:48:33.523191 4937 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bfxp8" podStartSLOduration=1.735239687 podStartE2EDuration="3.523169214s" podCreationTimestamp="2026-01-23 06:48:30 +0000 UTC" firstStartedPulling="2026-01-23 06:48:30.836922186 +0000 UTC m=+910.640688839" lastFinishedPulling="2026-01-23 06:48:32.624851713 +0000 UTC m=+912.428618366" observedRunningTime="2026-01-23 06:48:33.518937465 +0000 UTC m=+913.322704128" watchObservedRunningTime="2026-01-23 06:48:33.523169214 +0000 UTC m=+913.326935867" Jan 23 06:48:33 crc kubenswrapper[4937]: I0123 06:48:33.530347 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bfxp8"] Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.131308 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k5qb6"] Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.132135 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.153619 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k5qb6"] Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.254432 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcz4n\" (UniqueName: \"kubernetes.io/projected/2fb91dbd-bdfe-4e1d-b114-e4be54c52afc-kube-api-access-jcz4n\") pod \"openstack-operator-index-k5qb6\" (UID: \"2fb91dbd-bdfe-4e1d-b114-e4be54c52afc\") " pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.356028 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcz4n\" (UniqueName: \"kubernetes.io/projected/2fb91dbd-bdfe-4e1d-b114-e4be54c52afc-kube-api-access-jcz4n\") pod \"openstack-operator-index-k5qb6\" (UID: \"2fb91dbd-bdfe-4e1d-b114-e4be54c52afc\") " pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.375404 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcz4n\" (UniqueName: \"kubernetes.io/projected/2fb91dbd-bdfe-4e1d-b114-e4be54c52afc-kube-api-access-jcz4n\") pod \"openstack-operator-index-k5qb6\" (UID: \"2fb91dbd-bdfe-4e1d-b114-e4be54c52afc\") " pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.458799 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:34 crc kubenswrapper[4937]: I0123 06:48:34.738741 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k5qb6"] Jan 23 06:48:35 crc kubenswrapper[4937]: I0123 06:48:35.282336 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p8z6s" Jan 23 06:48:35 crc kubenswrapper[4937]: I0123 06:48:35.519445 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5qb6" event={"ID":"2fb91dbd-bdfe-4e1d-b114-e4be54c52afc","Type":"ContainerStarted","Data":"d85b384c19b456e408e0a9cd7f9cc23df892c6140f1ee9918d5395ec5b204f36"} Jan 23 06:48:35 crc kubenswrapper[4937]: I0123 06:48:35.519545 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5qb6" event={"ID":"2fb91dbd-bdfe-4e1d-b114-e4be54c52afc","Type":"ContainerStarted","Data":"32021f4f78c4f62fa9f22a5becfb616ce54f95a32847cb9d54920c2945243118"} Jan 23 06:48:35 crc kubenswrapper[4937]: I0123 06:48:35.519582 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bfxp8" podUID="0612473c-71cb-4d94-8449-fb4c67766ea5" containerName="registry-server" containerID="cri-o://099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3" gracePeriod=2 Jan 23 06:48:35 crc kubenswrapper[4937]: I0123 06:48:35.541871 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k5qb6" podStartSLOduration=1.487182626 podStartE2EDuration="1.541844763s" podCreationTimestamp="2026-01-23 06:48:34 +0000 UTC" firstStartedPulling="2026-01-23 06:48:34.744156173 +0000 UTC m=+914.547922866" lastFinishedPulling="2026-01-23 06:48:34.79881836 +0000 UTC m=+914.602585003" observedRunningTime="2026-01-23 
06:48:35.54058055 +0000 UTC m=+915.344347223" watchObservedRunningTime="2026-01-23 06:48:35.541844763 +0000 UTC m=+915.345611456" Jan 23 06:48:35 crc kubenswrapper[4937]: I0123 06:48:35.969417 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.076376 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6rwm\" (UniqueName: \"kubernetes.io/projected/0612473c-71cb-4d94-8449-fb4c67766ea5-kube-api-access-j6rwm\") pod \"0612473c-71cb-4d94-8449-fb4c67766ea5\" (UID: \"0612473c-71cb-4d94-8449-fb4c67766ea5\") " Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.086691 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0612473c-71cb-4d94-8449-fb4c67766ea5-kube-api-access-j6rwm" (OuterVolumeSpecName: "kube-api-access-j6rwm") pod "0612473c-71cb-4d94-8449-fb4c67766ea5" (UID: "0612473c-71cb-4d94-8449-fb4c67766ea5"). InnerVolumeSpecName "kube-api-access-j6rwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.178404 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6rwm\" (UniqueName: \"kubernetes.io/projected/0612473c-71cb-4d94-8449-fb4c67766ea5-kube-api-access-j6rwm\") on node \"crc\" DevicePath \"\"" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.528186 4937 generic.go:334] "Generic (PLEG): container finished" podID="0612473c-71cb-4d94-8449-fb4c67766ea5" containerID="099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3" exitCode=0 Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.528306 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bfxp8" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.540692 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfxp8" event={"ID":"0612473c-71cb-4d94-8449-fb4c67766ea5","Type":"ContainerDied","Data":"099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3"} Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.540786 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bfxp8" event={"ID":"0612473c-71cb-4d94-8449-fb4c67766ea5","Type":"ContainerDied","Data":"6bd1650e43888663c88e9c3f6e53048c38ff278d69b4072d8b2da205b332defc"} Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.540817 4937 scope.go:117] "RemoveContainer" containerID="099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.569334 4937 scope.go:117] "RemoveContainer" containerID="099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3" Jan 23 06:48:36 crc kubenswrapper[4937]: E0123 06:48:36.570000 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3\": container with ID starting with 099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3 not found: ID does not exist" containerID="099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.570056 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3"} err="failed to get container status \"099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3\": rpc error: code = NotFound desc = could not find container 
\"099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3\": container with ID starting with 099d424bd3b1d55066dd4ddaf293ef97de4c39a710631bdfb0614bb5588adab3 not found: ID does not exist" Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.571442 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bfxp8"] Jan 23 06:48:36 crc kubenswrapper[4937]: I0123 06:48:36.575897 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bfxp8"] Jan 23 06:48:38 crc kubenswrapper[4937]: I0123 06:48:38.545779 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0612473c-71cb-4d94-8449-fb4c67766ea5" path="/var/lib/kubelet/pods/0612473c-71cb-4d94-8449-fb4c67766ea5/volumes" Jan 23 06:48:44 crc kubenswrapper[4937]: I0123 06:48:44.459472 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:44 crc kubenswrapper[4937]: I0123 06:48:44.460302 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:44 crc kubenswrapper[4937]: I0123 06:48:44.504053 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:44 crc kubenswrapper[4937]: I0123 06:48:44.622369 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-k5qb6" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.289671 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cpm64" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.756562 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n"] Jan 23 06:48:45 crc kubenswrapper[4937]: E0123 
06:48:45.756822 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0612473c-71cb-4d94-8449-fb4c67766ea5" containerName="registry-server" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.756836 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="0612473c-71cb-4d94-8449-fb4c67766ea5" containerName="registry-server" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.756978 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="0612473c-71cb-4d94-8449-fb4c67766ea5" containerName="registry-server" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.757903 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.759620 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sdkkd" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.768063 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n"] Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.804909 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-util\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.804969 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtqnc\" (UniqueName: \"kubernetes.io/projected/ffb38885-b679-49a3-9151-e2d6f4afaa8e-kube-api-access-vtqnc\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" 
(UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.805002 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-bundle\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.905865 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-bundle\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.905957 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-util\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.906007 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtqnc\" (UniqueName: \"kubernetes.io/projected/ffb38885-b679-49a3-9151-e2d6f4afaa8e-kube-api-access-vtqnc\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" 
Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.906724 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-bundle\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.906746 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-util\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:45 crc kubenswrapper[4937]: I0123 06:48:45.924952 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtqnc\" (UniqueName: \"kubernetes.io/projected/ffb38885-b679-49a3-9151-e2d6f4afaa8e-kube-api-access-vtqnc\") pod \"c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:46 crc kubenswrapper[4937]: I0123 06:48:46.079484 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:46 crc kubenswrapper[4937]: I0123 06:48:46.809569 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n"] Jan 23 06:48:47 crc kubenswrapper[4937]: I0123 06:48:47.615919 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" event={"ID":"ffb38885-b679-49a3-9151-e2d6f4afaa8e","Type":"ContainerStarted","Data":"03a1df33d56067cb961ae91061202762468422ae112578d4a100d4f1892d673d"} Jan 23 06:48:47 crc kubenswrapper[4937]: I0123 06:48:47.616261 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" event={"ID":"ffb38885-b679-49a3-9151-e2d6f4afaa8e","Type":"ContainerStarted","Data":"cb44fdc06cad1434d4079299b23de264d46e897c1d219adfa5fbe45cf09ffafc"} Jan 23 06:48:48 crc kubenswrapper[4937]: I0123 06:48:48.627993 4937 generic.go:334] "Generic (PLEG): container finished" podID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerID="03a1df33d56067cb961ae91061202762468422ae112578d4a100d4f1892d673d" exitCode=0 Jan 23 06:48:48 crc kubenswrapper[4937]: I0123 06:48:48.628093 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" event={"ID":"ffb38885-b679-49a3-9151-e2d6f4afaa8e","Type":"ContainerDied","Data":"03a1df33d56067cb961ae91061202762468422ae112578d4a100d4f1892d673d"} Jan 23 06:48:49 crc kubenswrapper[4937]: I0123 06:48:49.639367 4937 generic.go:334] "Generic (PLEG): container finished" podID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerID="1a2f36456fb3b61d2d8e98d805ab4418c32b8635f471f2ac7fcde837dbf09ed6" exitCode=0 Jan 23 06:48:49 crc kubenswrapper[4937]: I0123 06:48:49.639453 4937 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" event={"ID":"ffb38885-b679-49a3-9151-e2d6f4afaa8e","Type":"ContainerDied","Data":"1a2f36456fb3b61d2d8e98d805ab4418c32b8635f471f2ac7fcde837dbf09ed6"} Jan 23 06:48:50 crc kubenswrapper[4937]: I0123 06:48:50.647649 4937 generic.go:334] "Generic (PLEG): container finished" podID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerID="41993b7b62bc4cb6d0da5868848115ec415a141207f00b8b4cee5a3b36b38bc1" exitCode=0 Jan 23 06:48:50 crc kubenswrapper[4937]: I0123 06:48:50.647715 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" event={"ID":"ffb38885-b679-49a3-9151-e2d6f4afaa8e","Type":"ContainerDied","Data":"41993b7b62bc4cb6d0da5868848115ec415a141207f00b8b4cee5a3b36b38bc1"} Jan 23 06:48:51 crc kubenswrapper[4937]: I0123 06:48:51.918676 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:51 crc kubenswrapper[4937]: I0123 06:48:51.989056 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtqnc\" (UniqueName: \"kubernetes.io/projected/ffb38885-b679-49a3-9151-e2d6f4afaa8e-kube-api-access-vtqnc\") pod \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " Jan 23 06:48:51 crc kubenswrapper[4937]: I0123 06:48:51.989199 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-util\") pod \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " Jan 23 06:48:51 crc kubenswrapper[4937]: I0123 06:48:51.989228 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-bundle\") pod \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\" (UID: \"ffb38885-b679-49a3-9151-e2d6f4afaa8e\") " Jan 23 06:48:51 crc kubenswrapper[4937]: I0123 06:48:51.990264 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-bundle" (OuterVolumeSpecName: "bundle") pod "ffb38885-b679-49a3-9151-e2d6f4afaa8e" (UID: "ffb38885-b679-49a3-9151-e2d6f4afaa8e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:48:51 crc kubenswrapper[4937]: I0123 06:48:51.996232 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb38885-b679-49a3-9151-e2d6f4afaa8e-kube-api-access-vtqnc" (OuterVolumeSpecName: "kube-api-access-vtqnc") pod "ffb38885-b679-49a3-9151-e2d6f4afaa8e" (UID: "ffb38885-b679-49a3-9151-e2d6f4afaa8e"). InnerVolumeSpecName "kube-api-access-vtqnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.003693 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-util" (OuterVolumeSpecName: "util") pod "ffb38885-b679-49a3-9151-e2d6f4afaa8e" (UID: "ffb38885-b679-49a3-9151-e2d6f4afaa8e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.090838 4937 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-util\") on node \"crc\" DevicePath \"\"" Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.091092 4937 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ffb38885-b679-49a3-9151-e2d6f4afaa8e-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.091211 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtqnc\" (UniqueName: \"kubernetes.io/projected/ffb38885-b679-49a3-9151-e2d6f4afaa8e-kube-api-access-vtqnc\") on node \"crc\" DevicePath \"\"" Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.666648 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" event={"ID":"ffb38885-b679-49a3-9151-e2d6f4afaa8e","Type":"ContainerDied","Data":"cb44fdc06cad1434d4079299b23de264d46e897c1d219adfa5fbe45cf09ffafc"} Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.666708 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb44fdc06cad1434d4079299b23de264d46e897c1d219adfa5fbe45cf09ffafc" Jan 23 06:48:52 crc kubenswrapper[4937]: I0123 06:48:52.666804 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.806299 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t5ln9"] Jan 23 06:48:54 crc kubenswrapper[4937]: E0123 06:48:54.806807 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="pull" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.806819 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="pull" Jan 23 06:48:54 crc kubenswrapper[4937]: E0123 06:48:54.806836 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="extract" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.806843 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="extract" Jan 23 06:48:54 crc kubenswrapper[4937]: E0123 06:48:54.806860 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="util" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.806865 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="util" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.806987 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb38885-b679-49a3-9151-e2d6f4afaa8e" containerName="extract" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.807769 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.827063 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5ln9"] Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.934701 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-catalog-content\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.935007 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zgt\" (UniqueName: \"kubernetes.io/projected/d288a541-a62d-431f-849f-cde2fb0e63ec-kube-api-access-s4zgt\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:54 crc kubenswrapper[4937]: I0123 06:48:54.935127 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-utilities\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.036843 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-catalog-content\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.036893 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s4zgt\" (UniqueName: \"kubernetes.io/projected/d288a541-a62d-431f-849f-cde2fb0e63ec-kube-api-access-s4zgt\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.036950 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-utilities\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.037511 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-catalog-content\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.037546 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-utilities\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.056729 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zgt\" (UniqueName: \"kubernetes.io/projected/d288a541-a62d-431f-849f-cde2fb0e63ec-kube-api-access-s4zgt\") pod \"certified-operators-t5ln9\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.123870 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.432471 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t5ln9"] Jan 23 06:48:55 crc kubenswrapper[4937]: I0123 06:48:55.686051 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5ln9" event={"ID":"d288a541-a62d-431f-849f-cde2fb0e63ec","Type":"ContainerStarted","Data":"ed7397f44a8c37d6db957809c4cc3b07cfb04612850c8049072620525ed78b16"} Jan 23 06:48:56 crc kubenswrapper[4937]: I0123 06:48:56.694876 4937 generic.go:334] "Generic (PLEG): container finished" podID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerID="657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645" exitCode=0 Jan 23 06:48:56 crc kubenswrapper[4937]: I0123 06:48:56.695988 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5ln9" event={"ID":"d288a541-a62d-431f-849f-cde2fb0e63ec","Type":"ContainerDied","Data":"657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645"} Jan 23 06:48:57 crc kubenswrapper[4937]: I0123 06:48:57.705640 4937 generic.go:334] "Generic (PLEG): container finished" podID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerID="e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f" exitCode=0 Jan 23 06:48:57 crc kubenswrapper[4937]: I0123 06:48:57.705744 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5ln9" event={"ID":"d288a541-a62d-431f-849f-cde2fb0e63ec","Type":"ContainerDied","Data":"e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f"} Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.094470 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n"] Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.095244 4937 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.097167 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-bzdgm" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.113182 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n"] Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.208826 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwcz5\" (UniqueName: \"kubernetes.io/projected/b459659a-4585-4a0b-86ca-c8aa91b81445-kube-api-access-wwcz5\") pod \"openstack-operator-controller-init-58865b47f6-csc5n\" (UID: \"b459659a-4585-4a0b-86ca-c8aa91b81445\") " pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.310473 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwcz5\" (UniqueName: \"kubernetes.io/projected/b459659a-4585-4a0b-86ca-c8aa91b81445-kube-api-access-wwcz5\") pod \"openstack-operator-controller-init-58865b47f6-csc5n\" (UID: \"b459659a-4585-4a0b-86ca-c8aa91b81445\") " pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.331273 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwcz5\" (UniqueName: \"kubernetes.io/projected/b459659a-4585-4a0b-86ca-c8aa91b81445-kube-api-access-wwcz5\") pod \"openstack-operator-controller-init-58865b47f6-csc5n\" (UID: \"b459659a-4585-4a0b-86ca-c8aa91b81445\") " pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 
06:48:58.413813 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.716869 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5ln9" event={"ID":"d288a541-a62d-431f-849f-cde2fb0e63ec","Type":"ContainerStarted","Data":"d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6"} Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.731775 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t5ln9" podStartSLOduration=3.32446117 podStartE2EDuration="4.731758766s" podCreationTimestamp="2026-01-23 06:48:54 +0000 UTC" firstStartedPulling="2026-01-23 06:48:56.697742583 +0000 UTC m=+936.501509256" lastFinishedPulling="2026-01-23 06:48:58.105040199 +0000 UTC m=+937.908806852" observedRunningTime="2026-01-23 06:48:58.730168665 +0000 UTC m=+938.533935318" watchObservedRunningTime="2026-01-23 06:48:58.731758766 +0000 UTC m=+938.535525419" Jan 23 06:48:58 crc kubenswrapper[4937]: I0123 06:48:58.926292 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n"] Jan 23 06:48:58 crc kubenswrapper[4937]: W0123 06:48:58.931543 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb459659a_4585_4a0b_86ca_c8aa91b81445.slice/crio-074a669057445ff7016506b5b54c9cd324e947c2a5e7d01a12805d7c560495cc WatchSource:0}: Error finding container 074a669057445ff7016506b5b54c9cd324e947c2a5e7d01a12805d7c560495cc: Status 404 returned error can't find the container with id 074a669057445ff7016506b5b54c9cd324e947c2a5e7d01a12805d7c560495cc Jan 23 06:48:59 crc kubenswrapper[4937]: I0123 06:48:59.726417 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" event={"ID":"b459659a-4585-4a0b-86ca-c8aa91b81445","Type":"ContainerStarted","Data":"074a669057445ff7016506b5b54c9cd324e947c2a5e7d01a12805d7c560495cc"} Jan 23 06:49:00 crc kubenswrapper[4937]: I0123 06:49:00.983029 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4579x"] Jan 23 06:49:00 crc kubenswrapper[4937]: I0123 06:49:00.986041 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:00 crc kubenswrapper[4937]: I0123 06:49:00.990861 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4579x"] Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.147691 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-utilities\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.147756 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh6nx\" (UniqueName: \"kubernetes.io/projected/5daa6b49-38c7-4ec9-995f-3c791505597e-kube-api-access-sh6nx\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.147778 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-catalog-content\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " 
pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.248883 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-utilities\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.248935 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh6nx\" (UniqueName: \"kubernetes.io/projected/5daa6b49-38c7-4ec9-995f-3c791505597e-kube-api-access-sh6nx\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.248960 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-catalog-content\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.249411 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-utilities\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.249432 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-catalog-content\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" 
Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.283755 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh6nx\" (UniqueName: \"kubernetes.io/projected/5daa6b49-38c7-4ec9-995f-3c791505597e-kube-api-access-sh6nx\") pod \"redhat-marketplace-4579x\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:01 crc kubenswrapper[4937]: I0123 06:49:01.340984 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:02 crc kubenswrapper[4937]: I0123 06:49:02.991735 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4579x"] Jan 23 06:49:02 crc kubenswrapper[4937]: W0123 06:49:02.995991 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5daa6b49_38c7_4ec9_995f_3c791505597e.slice/crio-e7a4d7b97396d9703ce3a8632dfc1a3337868d9929d17fe599be254e52dab582 WatchSource:0}: Error finding container e7a4d7b97396d9703ce3a8632dfc1a3337868d9929d17fe599be254e52dab582: Status 404 returned error can't find the container with id e7a4d7b97396d9703ce3a8632dfc1a3337868d9929d17fe599be254e52dab582 Jan 23 06:49:03 crc kubenswrapper[4937]: I0123 06:49:03.757754 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" event={"ID":"b459659a-4585-4a0b-86ca-c8aa91b81445","Type":"ContainerStarted","Data":"540921a96023af6e5dbdf3a1aca832aa29ca28d54ddbf52dff5609bbcc0eea13"} Jan 23 06:49:03 crc kubenswrapper[4937]: I0123 06:49:03.758185 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:49:03 crc kubenswrapper[4937]: I0123 06:49:03.759205 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerID="d31579dcef68c07bc107b7c1833a9a7abb53550ddbf6a20e0c35ce2d79b61660" exitCode=0 Jan 23 06:49:03 crc kubenswrapper[4937]: I0123 06:49:03.759263 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerDied","Data":"d31579dcef68c07bc107b7c1833a9a7abb53550ddbf6a20e0c35ce2d79b61660"} Jan 23 06:49:03 crc kubenswrapper[4937]: I0123 06:49:03.759325 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerStarted","Data":"e7a4d7b97396d9703ce3a8632dfc1a3337868d9929d17fe599be254e52dab582"} Jan 23 06:49:03 crc kubenswrapper[4937]: I0123 06:49:03.794797 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" podStartSLOduration=1.8664384809999999 podStartE2EDuration="5.794776332s" podCreationTimestamp="2026-01-23 06:48:58 +0000 UTC" firstStartedPulling="2026-01-23 06:48:58.934782373 +0000 UTC m=+938.738549026" lastFinishedPulling="2026-01-23 06:49:02.863120224 +0000 UTC m=+942.666886877" observedRunningTime="2026-01-23 06:49:03.789480857 +0000 UTC m=+943.593247520" watchObservedRunningTime="2026-01-23 06:49:03.794776332 +0000 UTC m=+943.598542995" Jan 23 06:49:04 crc kubenswrapper[4937]: I0123 06:49:04.771086 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerStarted","Data":"15bfb40e3ea31b75d62308de4f1df3b07199a3d4c1e0643be401c0daa80239f1"} Jan 23 06:49:05 crc kubenswrapper[4937]: I0123 06:49:05.124555 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:49:05 crc kubenswrapper[4937]: 
I0123 06:49:05.124626 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:49:05 crc kubenswrapper[4937]: I0123 06:49:05.183581 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:49:05 crc kubenswrapper[4937]: I0123 06:49:05.782090 4937 generic.go:334] "Generic (PLEG): container finished" podID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerID="15bfb40e3ea31b75d62308de4f1df3b07199a3d4c1e0643be401c0daa80239f1" exitCode=0 Jan 23 06:49:05 crc kubenswrapper[4937]: I0123 06:49:05.782266 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerDied","Data":"15bfb40e3ea31b75d62308de4f1df3b07199a3d4c1e0643be401c0daa80239f1"} Jan 23 06:49:05 crc kubenswrapper[4937]: I0123 06:49:05.861573 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:49:06 crc kubenswrapper[4937]: I0123 06:49:06.790225 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerStarted","Data":"14d12ad831318e22d5afc2d6689cbe3fd7e210e748e467b7e88a455129c78535"} Jan 23 06:49:06 crc kubenswrapper[4937]: I0123 06:49:06.812324 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4579x" podStartSLOduration=4.243576759 podStartE2EDuration="6.812309111s" podCreationTimestamp="2026-01-23 06:49:00 +0000 UTC" firstStartedPulling="2026-01-23 06:49:03.761187548 +0000 UTC m=+943.564954201" lastFinishedPulling="2026-01-23 06:49:06.32991989 +0000 UTC m=+946.133686553" observedRunningTime="2026-01-23 06:49:06.809671463 +0000 UTC m=+946.613438126" 
watchObservedRunningTime="2026-01-23 06:49:06.812309111 +0000 UTC m=+946.616075764" Jan 23 06:49:08 crc kubenswrapper[4937]: I0123 06:49:08.416950 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-58865b47f6-csc5n" Jan 23 06:49:08 crc kubenswrapper[4937]: I0123 06:49:08.772189 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5ln9"] Jan 23 06:49:08 crc kubenswrapper[4937]: I0123 06:49:08.772522 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t5ln9" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerName="registry-server" containerID="cri-o://d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6" gracePeriod=2 Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.616027 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.768280 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4zgt\" (UniqueName: \"kubernetes.io/projected/d288a541-a62d-431f-849f-cde2fb0e63ec-kube-api-access-s4zgt\") pod \"d288a541-a62d-431f-849f-cde2fb0e63ec\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.768363 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-catalog-content\") pod \"d288a541-a62d-431f-849f-cde2fb0e63ec\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.768444 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-utilities\") pod \"d288a541-a62d-431f-849f-cde2fb0e63ec\" (UID: \"d288a541-a62d-431f-849f-cde2fb0e63ec\") " Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.769455 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-utilities" (OuterVolumeSpecName: "utilities") pod "d288a541-a62d-431f-849f-cde2fb0e63ec" (UID: "d288a541-a62d-431f-849f-cde2fb0e63ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.769805 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.786815 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d288a541-a62d-431f-849f-cde2fb0e63ec-kube-api-access-s4zgt" (OuterVolumeSpecName: "kube-api-access-s4zgt") pod "d288a541-a62d-431f-849f-cde2fb0e63ec" (UID: "d288a541-a62d-431f-849f-cde2fb0e63ec"). InnerVolumeSpecName "kube-api-access-s4zgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.809985 4937 generic.go:334] "Generic (PLEG): container finished" podID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerID="d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6" exitCode=0 Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.810027 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5ln9" event={"ID":"d288a541-a62d-431f-849f-cde2fb0e63ec","Type":"ContainerDied","Data":"d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6"} Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.810053 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t5ln9" event={"ID":"d288a541-a62d-431f-849f-cde2fb0e63ec","Type":"ContainerDied","Data":"ed7397f44a8c37d6db957809c4cc3b07cfb04612850c8049072620525ed78b16"} Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.810068 4937 scope.go:117] "RemoveContainer" containerID="d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.810201 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t5ln9" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.813761 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d288a541-a62d-431f-849f-cde2fb0e63ec" (UID: "d288a541-a62d-431f-849f-cde2fb0e63ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.824961 4937 scope.go:117] "RemoveContainer" containerID="e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.846901 4937 scope.go:117] "RemoveContainer" containerID="657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.871229 4937 scope.go:117] "RemoveContainer" containerID="d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.871245 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d288a541-a62d-431f-849f-cde2fb0e63ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.871363 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4zgt\" (UniqueName: \"kubernetes.io/projected/d288a541-a62d-431f-849f-cde2fb0e63ec-kube-api-access-s4zgt\") on node \"crc\" DevicePath \"\"" Jan 23 06:49:09 crc kubenswrapper[4937]: E0123 06:49:09.871772 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6\": container with ID starting with d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6 not found: ID does not exist" containerID="d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.872866 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6"} err="failed to get container status \"d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6\": rpc error: code = NotFound desc = could not find 
container \"d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6\": container with ID starting with d849fc8fa8e15e5331c12a6aa6d98d76756e104c69b438e38d2bd15ae5dbf7b6 not found: ID does not exist" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.872996 4937 scope.go:117] "RemoveContainer" containerID="e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f" Jan 23 06:49:09 crc kubenswrapper[4937]: E0123 06:49:09.873451 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f\": container with ID starting with e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f not found: ID does not exist" containerID="e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.873483 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f"} err="failed to get container status \"e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f\": rpc error: code = NotFound desc = could not find container \"e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f\": container with ID starting with e05062614eec23a42c2ff2a7b656c1421f527e440c0898009b0162ce5967070f not found: ID does not exist" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.873511 4937 scope.go:117] "RemoveContainer" containerID="657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645" Jan 23 06:49:09 crc kubenswrapper[4937]: E0123 06:49:09.873938 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645\": container with ID starting with 657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645 not found: ID does 
not exist" containerID="657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645" Jan 23 06:49:09 crc kubenswrapper[4937]: I0123 06:49:09.874062 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645"} err="failed to get container status \"657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645\": rpc error: code = NotFound desc = could not find container \"657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645\": container with ID starting with 657ef01cfe455d6b33e38e3cba4941e32a313894b4bef954787094a0b1b2b645 not found: ID does not exist" Jan 23 06:49:10 crc kubenswrapper[4937]: I0123 06:49:10.167218 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t5ln9"] Jan 23 06:49:10 crc kubenswrapper[4937]: I0123 06:49:10.179627 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t5ln9"] Jan 23 06:49:10 crc kubenswrapper[4937]: I0123 06:49:10.535387 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" path="/var/lib/kubelet/pods/d288a541-a62d-431f-849f-cde2fb0e63ec/volumes" Jan 23 06:49:11 crc kubenswrapper[4937]: I0123 06:49:11.341574 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:11 crc kubenswrapper[4937]: I0123 06:49:11.347410 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:11 crc kubenswrapper[4937]: I0123 06:49:11.417028 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:11 crc kubenswrapper[4937]: I0123 06:49:11.888349 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:12 crc kubenswrapper[4937]: I0123 06:49:12.968702 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4579x"] Jan 23 06:49:14 crc kubenswrapper[4937]: I0123 06:49:14.844768 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4579x" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="registry-server" containerID="cri-o://14d12ad831318e22d5afc2d6689cbe3fd7e210e748e467b7e88a455129c78535" gracePeriod=2 Jan 23 06:49:15 crc kubenswrapper[4937]: I0123 06:49:15.862938 4937 generic.go:334] "Generic (PLEG): container finished" podID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerID="14d12ad831318e22d5afc2d6689cbe3fd7e210e748e467b7e88a455129c78535" exitCode=0 Jan 23 06:49:15 crc kubenswrapper[4937]: I0123 06:49:15.863111 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerDied","Data":"14d12ad831318e22d5afc2d6689cbe3fd7e210e748e467b7e88a455129c78535"} Jan 23 06:49:15 crc kubenswrapper[4937]: I0123 06:49:15.946979 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.055846 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-utilities\") pod \"5daa6b49-38c7-4ec9-995f-3c791505597e\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.055961 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-catalog-content\") pod \"5daa6b49-38c7-4ec9-995f-3c791505597e\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.056687 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh6nx\" (UniqueName: \"kubernetes.io/projected/5daa6b49-38c7-4ec9-995f-3c791505597e-kube-api-access-sh6nx\") pod \"5daa6b49-38c7-4ec9-995f-3c791505597e\" (UID: \"5daa6b49-38c7-4ec9-995f-3c791505597e\") " Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.056746 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-utilities" (OuterVolumeSpecName: "utilities") pod "5daa6b49-38c7-4ec9-995f-3c791505597e" (UID: "5daa6b49-38c7-4ec9-995f-3c791505597e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.056913 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.068816 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5daa6b49-38c7-4ec9-995f-3c791505597e-kube-api-access-sh6nx" (OuterVolumeSpecName: "kube-api-access-sh6nx") pod "5daa6b49-38c7-4ec9-995f-3c791505597e" (UID: "5daa6b49-38c7-4ec9-995f-3c791505597e"). InnerVolumeSpecName "kube-api-access-sh6nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.078221 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5daa6b49-38c7-4ec9-995f-3c791505597e" (UID: "5daa6b49-38c7-4ec9-995f-3c791505597e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.157880 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5daa6b49-38c7-4ec9-995f-3c791505597e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.158183 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh6nx\" (UniqueName: \"kubernetes.io/projected/5daa6b49-38c7-4ec9-995f-3c791505597e-kube-api-access-sh6nx\") on node \"crc\" DevicePath \"\"" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.871125 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4579x" event={"ID":"5daa6b49-38c7-4ec9-995f-3c791505597e","Type":"ContainerDied","Data":"e7a4d7b97396d9703ce3a8632dfc1a3337868d9929d17fe599be254e52dab582"} Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.871767 4937 scope.go:117] "RemoveContainer" containerID="14d12ad831318e22d5afc2d6689cbe3fd7e210e748e467b7e88a455129c78535" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.871164 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4579x" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.893680 4937 scope.go:117] "RemoveContainer" containerID="15bfb40e3ea31b75d62308de4f1df3b07199a3d4c1e0643be401c0daa80239f1" Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.894697 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4579x"] Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.901026 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4579x"] Jan 23 06:49:16 crc kubenswrapper[4937]: I0123 06:49:16.908008 4937 scope.go:117] "RemoveContainer" containerID="d31579dcef68c07bc107b7c1833a9a7abb53550ddbf6a20e0c35ce2d79b61660" Jan 23 06:49:18 crc kubenswrapper[4937]: I0123 06:49:18.533581 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" path="/var/lib/kubelet/pods/5daa6b49-38c7-4ec9-995f-3c791505597e/volumes" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.386931 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb"] Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.387817 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="extract-content" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.387834 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="extract-content" Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.387851 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerName="extract-utilities" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.387858 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" 
containerName="extract-utilities" Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.387879 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerName="extract-content" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.387889 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerName="extract-content" Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.387898 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="registry-server" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.387905 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="registry-server" Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.387920 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerName="registry-server" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.387929 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" containerName="registry-server" Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.387940 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="extract-utilities" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.387947 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="extract-utilities" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.388080 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5daa6b49-38c7-4ec9-995f-3c791505597e" containerName="registry-server" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.388092 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d288a541-a62d-431f-849f-cde2fb0e63ec" 
containerName="registry-server" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.388560 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.391914 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xcq49" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.400867 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.405192 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.405974 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.407854 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rgdzr" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.422404 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.433065 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.433912 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.438193 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jpnng" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.439236 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.439993 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.444621 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.450093 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pjl2c" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.461332 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.474520 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.475357 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.481163 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6jbcc" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.487771 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.498307 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgr2\" (UniqueName: \"kubernetes.io/projected/166795a6-99d8-4030-89eb-7bdef35519dc-kube-api-access-lhgr2\") pod \"glance-operator-controller-manager-78fdd796fd-4k4vj\" (UID: \"166795a6-99d8-4030-89eb-7bdef35519dc\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.498360 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m5ch\" (UniqueName: \"kubernetes.io/projected/62aa0d56-f0fe-4cc7-a5dd-b15b7471844d-kube-api-access-9m5ch\") pod \"barbican-operator-controller-manager-7f86f8796f-qkqbb\" (UID: \"62aa0d56-f0fe-4cc7-a5dd-b15b7471844d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.498386 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvx2\" (UniqueName: \"kubernetes.io/projected/a0de6431-d5d9-46ec-a7bf-b4c3c999ba22-kube-api-access-txvx2\") pod \"cinder-operator-controller-manager-69cf5d4557-k77l4\" (UID: \"a0de6431-d5d9-46ec-a7bf-b4c3c999ba22\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.498449 
4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zml\" (UniqueName: \"kubernetes.io/projected/a97495b6-7a9f-454e-8197-af75abec2f3e-kube-api-access-f5zml\") pod \"designate-operator-controller-manager-b45d7bf98-gtfpn\" (UID: \"a97495b6-7a9f-454e-8197-af75abec2f3e\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.504664 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.505803 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.509089 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.509096 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7dnj6" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.521525 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.522418 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.526955 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dgqld" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.546946 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.547662 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.551140 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.551656 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4xlpl" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.569852 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.570699 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.577328 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-26wlw" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.578480 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.594729 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.603216 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.604029 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m5ch\" (UniqueName: \"kubernetes.io/projected/62aa0d56-f0fe-4cc7-a5dd-b15b7471844d-kube-api-access-9m5ch\") pod \"barbican-operator-controller-manager-7f86f8796f-qkqbb\" (UID: \"62aa0d56-f0fe-4cc7-a5dd-b15b7471844d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.604137 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvx2\" (UniqueName: \"kubernetes.io/projected/a0de6431-d5d9-46ec-a7bf-b4c3c999ba22-kube-api-access-txvx2\") pod \"cinder-operator-controller-manager-69cf5d4557-k77l4\" (UID: \"a0de6431-d5d9-46ec-a7bf-b4c3c999ba22\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.604228 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-96cnc\" (UniqueName: \"kubernetes.io/projected/6c765e89-13b6-4588-a1a4-697b5553bdd0-kube-api-access-96cnc\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.604342 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tlc\" (UniqueName: \"kubernetes.io/projected/7e434a74-d86f-4d68-867f-bad41bdf53b5-kube-api-access-n6tlc\") pod \"horizon-operator-controller-manager-77d5c5b54f-hfslp\" (UID: \"7e434a74-d86f-4d68-867f-bad41bdf53b5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.604452 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm77c\" (UniqueName: \"kubernetes.io/projected/b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e-kube-api-access-xm77c\") pod \"ironic-operator-controller-manager-598f7747c9-c5c4f\" (UID: \"b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.604529 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5zml\" (UniqueName: \"kubernetes.io/projected/a97495b6-7a9f-454e-8197-af75abec2f3e-kube-api-access-f5zml\") pod \"designate-operator-controller-manager-b45d7bf98-gtfpn\" (UID: \"a97495b6-7a9f-454e-8197-af75abec2f3e\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.610692 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrpw\" (UniqueName: 
\"kubernetes.io/projected/e1351960-51ec-4735-9019-d267f29568d5-kube-api-access-6zrpw\") pod \"heat-operator-controller-manager-594c8c9d5d-5l6fx\" (UID: \"e1351960-51ec-4735-9019-d267f29568d5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.610902 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.610988 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhgr2\" (UniqueName: \"kubernetes.io/projected/166795a6-99d8-4030-89eb-7bdef35519dc-kube-api-access-lhgr2\") pod \"glance-operator-controller-manager-78fdd796fd-4k4vj\" (UID: \"166795a6-99d8-4030-89eb-7bdef35519dc\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.651241 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.652820 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.655386 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kf2sj" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.665489 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvx2\" (UniqueName: \"kubernetes.io/projected/a0de6431-d5d9-46ec-a7bf-b4c3c999ba22-kube-api-access-txvx2\") pod \"cinder-operator-controller-manager-69cf5d4557-k77l4\" (UID: \"a0de6431-d5d9-46ec-a7bf-b4c3c999ba22\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.676515 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m5ch\" (UniqueName: \"kubernetes.io/projected/62aa0d56-f0fe-4cc7-a5dd-b15b7471844d-kube-api-access-9m5ch\") pod \"barbican-operator-controller-manager-7f86f8796f-qkqbb\" (UID: \"62aa0d56-f0fe-4cc7-a5dd-b15b7471844d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.681433 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5zml\" (UniqueName: \"kubernetes.io/projected/a97495b6-7a9f-454e-8197-af75abec2f3e-kube-api-access-f5zml\") pod \"designate-operator-controller-manager-b45d7bf98-gtfpn\" (UID: \"a97495b6-7a9f-454e-8197-af75abec2f3e\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.682091 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhgr2\" (UniqueName: \"kubernetes.io/projected/166795a6-99d8-4030-89eb-7bdef35519dc-kube-api-access-lhgr2\") pod \"glance-operator-controller-manager-78fdd796fd-4k4vj\" (UID: 
\"166795a6-99d8-4030-89eb-7bdef35519dc\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.688176 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.705351 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.706064 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"] Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.707143 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.715060 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4qmc8" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.715912 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrpw\" (UniqueName: \"kubernetes.io/projected/e1351960-51ec-4735-9019-d267f29568d5-kube-api-access-6zrpw\") pod \"heat-operator-controller-manager-594c8c9d5d-5l6fx\" (UID: \"e1351960-51ec-4735-9019-d267f29568d5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.716040 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgwmv\" (UniqueName: \"kubernetes.io/projected/f255c056-65ce-42fc-9eb6-29395dcde9a3-kube-api-access-jgwmv\") pod \"keystone-operator-controller-manager-b8b6d4659-5bt2v\" (UID: 
\"f255c056-65ce-42fc-9eb6-29395dcde9a3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.716137 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.716256 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96cnc\" (UniqueName: \"kubernetes.io/projected/6c765e89-13b6-4588-a1a4-697b5553bdd0-kube-api-access-96cnc\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.716361 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvwn\" (UniqueName: \"kubernetes.io/projected/7d322eb6-3116-4a70-8845-e62977302d86-kube-api-access-vpvwn\") pod \"manila-operator-controller-manager-78c6999f6f-xrgdc\" (UID: \"7d322eb6-3116-4a70-8845-e62977302d86\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc" Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.716270 4937 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:26 crc kubenswrapper[4937]: E0123 06:49:26.716486 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert podName:6c765e89-13b6-4588-a1a4-697b5553bdd0 nodeName:}" failed. 
No retries permitted until 2026-01-23 06:49:27.21647168 +0000 UTC m=+967.020238333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert") pod "infra-operator-controller-manager-58749ffdfb-9kphf" (UID: "6c765e89-13b6-4588-a1a4-697b5553bdd0") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.716443 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6tlc\" (UniqueName: \"kubernetes.io/projected/7e434a74-d86f-4d68-867f-bad41bdf53b5-kube-api-access-n6tlc\") pod \"horizon-operator-controller-manager-77d5c5b54f-hfslp\" (UID: \"7e434a74-d86f-4d68-867f-bad41bdf53b5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.716741 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm77c\" (UniqueName: \"kubernetes.io/projected/b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e-kube-api-access-xm77c\") pod \"ironic-operator-controller-manager-598f7747c9-c5c4f\" (UID: \"b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.720399 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.739200 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.743820 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96cnc\" (UniqueName: \"kubernetes.io/projected/6c765e89-13b6-4588-a1a4-697b5553bdd0-kube-api-access-96cnc\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.745096 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.746072 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.747945 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9qfwr"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.748518 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm77c\" (UniqueName: \"kubernetes.io/projected/b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e-kube-api-access-xm77c\") pod \"ironic-operator-controller-manager-598f7747c9-c5c4f\" (UID: \"b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.754910 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.756235 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6tlc\" (UniqueName: \"kubernetes.io/projected/7e434a74-d86f-4d68-867f-bad41bdf53b5-kube-api-access-n6tlc\") pod \"horizon-operator-controller-manager-77d5c5b54f-hfslp\" (UID: \"7e434a74-d86f-4d68-867f-bad41bdf53b5\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.768027 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.769230 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.772499 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.773166 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrpw\" (UniqueName: \"kubernetes.io/projected/e1351960-51ec-4735-9019-d267f29568d5-kube-api-access-6zrpw\") pod \"heat-operator-controller-manager-594c8c9d5d-5l6fx\" (UID: \"e1351960-51ec-4735-9019-d267f29568d5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.775205 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-h6gr8"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.783434 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.793329 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.794842 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.800751 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.801532 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.806198 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rg44s"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.811601 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.817699 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.818428 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56pjj\" (UniqueName: \"kubernetes.io/projected/77fd44ee-ceab-4595-890e-7310ec8b6cb2-kube-api-access-56pjj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7\" (UID: \"77fd44ee-ceab-4595-890e-7310ec8b6cb2\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.818461 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6zz\" (UniqueName: \"kubernetes.io/projected/f1cca66c-d0b5-488a-a3d4-9e0b1714c33c-kube-api-access-rl6zz\") pod \"neutron-operator-controller-manager-78d58447c5-sd8vq\" (UID: \"f1cca66c-d0b5-488a-a3d4-9e0b1714c33c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.818484 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjdxp\" (UniqueName: \"kubernetes.io/projected/9d773aa4-667d-431e-b63f-0f4c45d22d58-kube-api-access-fjdxp\") pod \"nova-operator-controller-manager-6b8bc8d87d-wmhnh\" (UID: \"9d773aa4-667d-431e-b63f-0f4c45d22d58\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.818511 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgwmv\" (UniqueName: \"kubernetes.io/projected/f255c056-65ce-42fc-9eb6-29395dcde9a3-kube-api-access-jgwmv\") pod \"keystone-operator-controller-manager-b8b6d4659-5bt2v\" (UID: \"f255c056-65ce-42fc-9eb6-29395dcde9a3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.818582 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpvwn\" (UniqueName: \"kubernetes.io/projected/7d322eb6-3116-4a70-8845-e62977302d86-kube-api-access-vpvwn\") pod \"manila-operator-controller-manager-78c6999f6f-xrgdc\" (UID: \"7d322eb6-3116-4a70-8845-e62977302d86\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.818725 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.822735 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.823501 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.826765 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-vghzk"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.827020 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-77lzd"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.833721 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.845812 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.846463 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.852078 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpvwn\" (UniqueName: \"kubernetes.io/projected/7d322eb6-3116-4a70-8845-e62977302d86-kube-api-access-vpvwn\") pod \"manila-operator-controller-manager-78c6999f6f-xrgdc\" (UID: \"7d322eb6-3116-4a70-8845-e62977302d86\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.853521 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgwmv\" (UniqueName: \"kubernetes.io/projected/f255c056-65ce-42fc-9eb6-29395dcde9a3-kube-api-access-jgwmv\") pod \"keystone-operator-controller-manager-b8b6d4659-5bt2v\" (UID: \"f255c056-65ce-42fc-9eb6-29395dcde9a3\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.865372 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.866813 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.869361 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.869963 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.870524 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.874001 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.874665 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-h4wsl"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.875919 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5t4rx"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.895537 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.896707 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.907179 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920391 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdcrf\" (UniqueName: \"kubernetes.io/projected/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-kube-api-access-mdcrf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920480 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920516 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56pjj\" (UniqueName: \"kubernetes.io/projected/77fd44ee-ceab-4595-890e-7310ec8b6cb2-kube-api-access-56pjj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7\" (UID: \"77fd44ee-ceab-4595-890e-7310ec8b6cb2\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920545 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl6zz\" (UniqueName: \"kubernetes.io/projected/f1cca66c-d0b5-488a-a3d4-9e0b1714c33c-kube-api-access-rl6zz\") pod \"neutron-operator-controller-manager-78d58447c5-sd8vq\" (UID: \"f1cca66c-d0b5-488a-a3d4-9e0b1714c33c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920576 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjdxp\" (UniqueName: \"kubernetes.io/projected/9d773aa4-667d-431e-b63f-0f4c45d22d58-kube-api-access-fjdxp\") pod \"nova-operator-controller-manager-6b8bc8d87d-wmhnh\" (UID: \"9d773aa4-667d-431e-b63f-0f4c45d22d58\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920620 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsdh\" (UniqueName: \"kubernetes.io/projected/c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a-kube-api-access-2xsdh\") pod \"swift-operator-controller-manager-547cbdb99f-qwg2k\" (UID: \"c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920708 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcxk\" (UniqueName: \"kubernetes.io/projected/7f7029d6-79d1-4698-91ca-bc61d66124ab-kube-api-access-wqcxk\") pod \"placement-operator-controller-manager-5d646b7d76-4cdbv\" (UID: \"7f7029d6-79d1-4698-91ca-bc61d66124ab\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920742 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnn4t\" (UniqueName: \"kubernetes.io/projected/5ba7dbd8-68af-4677-bd8e-686c19912769-kube-api-access-gnn4t\") pod \"ovn-operator-controller-manager-55db956ddc-pxrfl\" (UID: \"5ba7dbd8-68af-4677-bd8e-686c19912769\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.920771 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7wzr\" (UniqueName: \"kubernetes.io/projected/c8fbb575-36e1-452e-8800-9b310540b205-kube-api-access-z7wzr\") pod \"octavia-operator-controller-manager-7bd9774b6-9xwkd\" (UID: \"c8fbb575-36e1-452e-8800-9b310540b205\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.931000 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.932276 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.935976 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.939823 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xh7m6"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.942429 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl6zz\" (UniqueName: \"kubernetes.io/projected/f1cca66c-d0b5-488a-a3d4-9e0b1714c33c-kube-api-access-rl6zz\") pod \"neutron-operator-controller-manager-78d58447c5-sd8vq\" (UID: \"f1cca66c-d0b5-488a-a3d4-9e0b1714c33c\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.943373 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56pjj\" (UniqueName: \"kubernetes.io/projected/77fd44ee-ceab-4595-890e-7310ec8b6cb2-kube-api-access-56pjj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7\" (UID: \"77fd44ee-ceab-4595-890e-7310ec8b6cb2\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.947407 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjdxp\" (UniqueName: \"kubernetes.io/projected/9d773aa4-667d-431e-b63f-0f4c45d22d58-kube-api-access-fjdxp\") pod \"nova-operator-controller-manager-6b8bc8d87d-wmhnh\" (UID: \"9d773aa4-667d-431e-b63f-0f4c45d22d58\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.967271 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"]
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.971393 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.974396 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mckwq"
Jan 23 06:49:26 crc kubenswrapper[4937]: I0123 06:49:26.989168 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.020094 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.021035 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.034854 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9j8xn"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.061481 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087471 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdcrf\" (UniqueName: \"kubernetes.io/projected/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-kube-api-access-mdcrf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087547 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087623 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xsdh\" (UniqueName: \"kubernetes.io/projected/c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a-kube-api-access-2xsdh\") pod \"swift-operator-controller-manager-547cbdb99f-qwg2k\" (UID: \"c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087722 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqcxk\" (UniqueName: \"kubernetes.io/projected/7f7029d6-79d1-4698-91ca-bc61d66124ab-kube-api-access-wqcxk\") pod \"placement-operator-controller-manager-5d646b7d76-4cdbv\" (UID: \"7f7029d6-79d1-4698-91ca-bc61d66124ab\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087760 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnn4t\" (UniqueName: \"kubernetes.io/projected/5ba7dbd8-68af-4677-bd8e-686c19912769-kube-api-access-gnn4t\") pod \"ovn-operator-controller-manager-55db956ddc-pxrfl\" (UID: \"5ba7dbd8-68af-4677-bd8e-686c19912769\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087802 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7wzr\" (UniqueName: \"kubernetes.io/projected/c8fbb575-36e1-452e-8800-9b310540b205-kube-api-access-z7wzr\") pod \"octavia-operator-controller-manager-7bd9774b6-9xwkd\" (UID: \"c8fbb575-36e1-452e-8800-9b310540b205\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087829 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqd97\" (UniqueName: \"kubernetes.io/projected/17b47330-4556-499e-83dd-7e67a9a73824-kube-api-access-jqd97\") pod \"test-operator-controller-manager-69797bbcbd-9q4dg\" (UID: \"17b47330-4556-499e-83dd-7e67a9a73824\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.087875 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjb8\" (UniqueName: \"kubernetes.io/projected/75724f78-fc93-46a1-bfb2-037fe76b1edd-kube-api-access-frjb8\") pod \"telemetry-operator-controller-manager-85cd9769bb-99bb7\" (UID: \"75724f78-fc93-46a1-bfb2-037fe76b1edd\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.087934 4937 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.088804 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert podName:13a2ad28-4ba2-4470-8e0d-ca42de8e6653 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:27.588781987 +0000 UTC m=+967.392548640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" (UID: "13a2ad28-4ba2-4470-8e0d-ca42de8e6653") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.124278 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.124444 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.127722 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.135754 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.139229 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.140667 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.142760 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.143260 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.145290 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqcxk\" (UniqueName: \"kubernetes.io/projected/7f7029d6-79d1-4698-91ca-bc61d66124ab-kube-api-access-wqcxk\") pod \"placement-operator-controller-manager-5d646b7d76-4cdbv\" (UID: \"7f7029d6-79d1-4698-91ca-bc61d66124ab\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.147990 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnn4t\" (UniqueName: \"kubernetes.io/projected/5ba7dbd8-68af-4677-bd8e-686c19912769-kube-api-access-gnn4t\") pod \"ovn-operator-controller-manager-55db956ddc-pxrfl\" (UID: \"5ba7dbd8-68af-4677-bd8e-686c19912769\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.149157 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdcrf\" (UniqueName: \"kubernetes.io/projected/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-kube-api-access-mdcrf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.155081 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-s7z7r"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.155513 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7wzr\" (UniqueName: \"kubernetes.io/projected/c8fbb575-36e1-452e-8800-9b310540b205-kube-api-access-z7wzr\") pod \"octavia-operator-controller-manager-7bd9774b6-9xwkd\" (UID: \"c8fbb575-36e1-452e-8800-9b310540b205\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.169468 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.172684 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xsdh\" (UniqueName: \"kubernetes.io/projected/c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a-kube-api-access-2xsdh\") pod \"swift-operator-controller-manager-547cbdb99f-qwg2k\" (UID: \"c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.173713 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.183193 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.189096 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqd97\" (UniqueName: \"kubernetes.io/projected/17b47330-4556-499e-83dd-7e67a9a73824-kube-api-access-jqd97\") pod \"test-operator-controller-manager-69797bbcbd-9q4dg\" (UID: \"17b47330-4556-499e-83dd-7e67a9a73824\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.189155 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frjb8\" (UniqueName: \"kubernetes.io/projected/75724f78-fc93-46a1-bfb2-037fe76b1edd-kube-api-access-frjb8\") pod \"telemetry-operator-controller-manager-85cd9769bb-99bb7\" (UID: \"75724f78-fc93-46a1-bfb2-037fe76b1edd\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.189222 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgbd\" (UniqueName: \"kubernetes.io/projected/23892b9d-9ef2-4c33-aaa0-1c858cd9255d-kube-api-access-sdgbd\") pod \"watcher-operator-controller-manager-5d9cd495bb-kw22f\" (UID: \"23892b9d-9ef2-4c33-aaa0-1c858cd9255d\") " pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.200792 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.221115 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.222075 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.222757 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frjb8\" (UniqueName: \"kubernetes.io/projected/75724f78-fc93-46a1-bfb2-037fe76b1edd-kube-api-access-frjb8\") pod \"telemetry-operator-controller-manager-85cd9769bb-99bb7\" (UID: \"75724f78-fc93-46a1-bfb2-037fe76b1edd\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.224312 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqd97\" (UniqueName: \"kubernetes.io/projected/17b47330-4556-499e-83dd-7e67a9a73824-kube-api-access-jqd97\") pod \"test-operator-controller-manager-69797bbcbd-9q4dg\" (UID: \"17b47330-4556-499e-83dd-7e67a9a73824\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.230039 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wnzt4"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.241465 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.267091 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.297503 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b7kk\" (UniqueName: \"kubernetes.io/projected/a58eade1-a27e-42ed-8a1b-f43803d53498-kube-api-access-2b7kk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7qvkg\" (UID: \"a58eade1-a27e-42ed-8a1b-f43803d53498\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.297543 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.297561 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.297611 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdgbd\" (UniqueName: \"kubernetes.io/projected/23892b9d-9ef2-4c33-aaa0-1c858cd9255d-kube-api-access-sdgbd\") pod \"watcher-operator-controller-manager-5d9cd495bb-kw22f\" (UID: \"23892b9d-9ef2-4c33-aaa0-1c858cd9255d\") " pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.297646 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwxj\" (UniqueName: \"kubernetes.io/projected/a7311b13-72db-4f12-9617-039ee018dee7-kube-api-access-pcwxj\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.297680 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"
Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.297811 4937 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.297856 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert podName:6c765e89-13b6-4588-a1a4-697b5553bdd0 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:28.29784075 +0000 UTC m=+968.101607403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert") pod "infra-operator-controller-manager-58749ffdfb-9kphf" (UID: "6c765e89-13b6-4588-a1a4-697b5553bdd0") : secret "infra-operator-webhook-server-cert" not found
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.327120 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4"]
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.328760 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdgbd\" (UniqueName: \"kubernetes.io/projected/23892b9d-9ef2-4c33-aaa0-1c858cd9255d-kube-api-access-sdgbd\") pod \"watcher-operator-controller-manager-5d9cd495bb-kw22f\" (UID: \"23892b9d-9ef2-4c33-aaa0-1c858cd9255d\") " pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.337476 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.391883 4937 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.400609 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcwxj\" (UniqueName: \"kubernetes.io/projected/a7311b13-72db-4f12-9617-039ee018dee7-kube-api-access-pcwxj\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.400763 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b7kk\" (UniqueName: \"kubernetes.io/projected/a58eade1-a27e-42ed-8a1b-f43803d53498-kube-api-access-2b7kk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7qvkg\" (UID: \"a58eade1-a27e-42ed-8a1b-f43803d53498\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.400798 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.400822 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.400976 4937 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.401035 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:27.901015746 +0000 UTC m=+967.704782399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "webhook-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.404498 4937 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.404560 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:27.904544627 +0000 UTC m=+967.708311280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "metrics-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.419156 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb"] Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.430370 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcwxj\" (UniqueName: \"kubernetes.io/projected/a7311b13-72db-4f12-9617-039ee018dee7-kube-api-access-pcwxj\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.431625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b7kk\" (UniqueName: \"kubernetes.io/projected/a58eade1-a27e-42ed-8a1b-f43803d53498-kube-api-access-2b7kk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7qvkg\" (UID: \"a58eade1-a27e-42ed-8a1b-f43803d53498\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.452964 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.461076 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn"] Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.565676 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.570789 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj"] Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.662989 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.663945 4937 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.663988 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert podName:13a2ad28-4ba2-4470-8e0d-ca42de8e6653 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:28.663973397 +0000 UTC m=+968.467740050 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" (UID: "13a2ad28-4ba2-4470-8e0d-ca42de8e6653") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.753380 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"] Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.969321 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:27 crc kubenswrapper[4937]: I0123 06:49:27.969371 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.969644 4937 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.969703 4937 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.969718 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs 
podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:28.969700049 +0000 UTC m=+968.773466702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "webhook-server-cert" not found Jan 23 06:49:27 crc kubenswrapper[4937]: E0123 06:49:27.969738 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:28.96972802 +0000 UTC m=+968.773494673 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "metrics-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.019784 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" event={"ID":"166795a6-99d8-4030-89eb-7bdef35519dc","Type":"ContainerStarted","Data":"3829f5a58d5357e5ec8101627ecdf411a2004ca2a18eb533051f38f3d0be767b"} Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.022639 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" event={"ID":"a0de6431-d5d9-46ec-a7bf-b4c3c999ba22","Type":"ContainerStarted","Data":"d0fb4892419522768f11e51560fe33e15b314745d7ffb7d43b02534e3dd1efce"} Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.026711 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" event={"ID":"a97495b6-7a9f-454e-8197-af75abec2f3e","Type":"ContainerStarted","Data":"e1e5790e6f2b6636e2ced48927ae357f3878add298e77ac61c86a7a31b9c15cc"} Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.029849 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" event={"ID":"62aa0d56-f0fe-4cc7-a5dd-b15b7471844d","Type":"ContainerStarted","Data":"54f7173078ff1912b5c6723e81ff9cdcbce5f406529f4d9e3322967f0274dc09"} Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.036506 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx" event={"ID":"e1351960-51ec-4735-9019-d267f29568d5","Type":"ContainerStarted","Data":"930bc1fd210065bf91158b28cec26dbfe9e7285003e41732fe2ea19492f8b611"} Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.072062 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp"] Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.081341 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e434a74_d86f_4d68_867f_bad41bdf53b5.slice/crio-97ef21744bab4a53de2dd5bb10cbf10b6413edd692c851814395535ed2ae3a61 WatchSource:0}: Error finding container 97ef21744bab4a53de2dd5bb10cbf10b6413edd692c851814395535ed2ae3a61: Status 404 returned error can't find the container with id 97ef21744bab4a53de2dd5bb10cbf10b6413edd692c851814395535ed2ae3a61 Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.087741 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.106450 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f"] Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.123104 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf255c056_65ce_42fc_9eb6_29395dcde9a3.slice/crio-ad133bd08d22df0553bff0f53116f1f3d7c2f98823bd730a3b8297078a3f64fe WatchSource:0}: Error finding container ad133bd08d22df0553bff0f53116f1f3d7c2f98823bd730a3b8297078a3f64fe: Status 404 returned error can't find the container with id ad133bd08d22df0553bff0f53116f1f3d7c2f98823bd730a3b8297078a3f64fe Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.191432 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.197359 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"] Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.211850 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d322eb6_3116_4a70_8845_e62977302d86.slice/crio-07d33cf1c96d88adaf20ff1ec70f403e551b27b47f084615028f0ecc69c739e7 WatchSource:0}: Error finding container 07d33cf1c96d88adaf20ff1ec70f403e551b27b47f084615028f0ecc69c739e7: Status 404 returned error can't find the container with id 07d33cf1c96d88adaf20ff1ec70f403e551b27b47f084615028f0ecc69c739e7 Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.346163 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.354842 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 
06:49:28.361832 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.369429 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"] Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.371475 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d773aa4_667d_431e_b63f_0f4c45d22d58.slice/crio-008f313fec50e689f471da105347fcd65c311d221def0a42a90700f87f9901a7 WatchSource:0}: Error finding container 008f313fec50e689f471da105347fcd65c311d221def0a42a90700f87f9901a7: Status 404 returned error can't find the container with id 008f313fec50e689f471da105347fcd65c311d221def0a42a90700f87f9901a7 Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.374867 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3c7df6a_2e6c_4a57_b7b6_ed070b4eeb3a.slice/crio-e277d02f804dd2e2f40b53f78e0f9a63ebf0f615eec77b0ee4b4b4da7957885f WatchSource:0}: Error finding container e277d02f804dd2e2f40b53f78e0f9a63ebf0f615eec77b0ee4b4b4da7957885f: Status 404 returned error can't find the container with id e277d02f804dd2e2f40b53f78e0f9a63ebf0f615eec77b0ee4b4b4da7957885f Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.377519 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.377817 4937 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: 
secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.377868 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert podName:6c765e89-13b6-4588-a1a4-697b5553bdd0 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:30.377851429 +0000 UTC m=+970.181618082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert") pod "infra-operator-controller-manager-58749ffdfb-9kphf" (UID: "6c765e89-13b6-4588-a1a4-697b5553bdd0") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.380603 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba7dbd8_68af_4677_bd8e_686c19912769.slice/crio-87bdd8e29a4ec8ba88c24fc4d3f1d91d84724d965446468cd08c8fe8181246da WatchSource:0}: Error finding container 87bdd8e29a4ec8ba88c24fc4d3f1d91d84724d965446468cd08c8fe8181246da: Status 404 returned error can't find the container with id 87bdd8e29a4ec8ba88c24fc4d3f1d91d84724d965446468cd08c8fe8181246da Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.616160 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.637790 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"] Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.642072 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda58eade1_a27e_42ed_8a1b_f43803d53498.slice/crio-00cb2691f0e3983a060303fc70c16c846a684e2e034f444cdef2f9fded047abc WatchSource:0}: Error finding container 
00cb2691f0e3983a060303fc70c16c846a684e2e034f444cdef2f9fded047abc: Status 404 returned error can't find the container with id 00cb2691f0e3983a060303fc70c16c846a684e2e034f444cdef2f9fded047abc Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.653540 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jqd97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-9q4dg_openstack-operators(17b47330-4556-499e-83dd-7e67a9a73824): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.655202 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg" podUID="17b47330-4556-499e-83dd-7e67a9a73824" Jan 23 06:49:28 crc kubenswrapper[4937]: W0123 06:49:28.655636 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f7029d6_79d1_4698_91ca_bc61d66124ab.slice/crio-94bb32321a1bed1c13317f382a0bc9c0f1b1e0d7740451e14270e56260ee49c5 WatchSource:0}: Error finding container 94bb32321a1bed1c13317f382a0bc9c0f1b1e0d7740451e14270e56260ee49c5: Status 404 returned error can't find the container with id 94bb32321a1bed1c13317f382a0bc9c0f1b1e0d7740451e14270e56260ee49c5 Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.661027 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqcxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-4cdbv_openstack-operators(7f7029d6-79d1-4698-91ca-bc61d66124ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.663217 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg"] Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.663258 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv" podUID="7f7029d6-79d1-4698-91ca-bc61d66124ab" Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.668602 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"] Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.672963 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7wzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-9xwkd_openstack-operators(c8fbb575-36e1-452e-8800-9b310540b205): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.673119 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.44:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdgbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5d9cd495bb-kw22f_openstack-operators(23892b9d-9ef2-4c33-aaa0-1c858cd9255d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.673195 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"] Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.674418 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" podUID="c8fbb575-36e1-452e-8800-9b310540b205" Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.675653 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" podUID="23892b9d-9ef2-4c33-aaa0-1c858cd9255d" Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.684066 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"] Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.688192 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.688371 4937 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.688419 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert podName:13a2ad28-4ba2-4470-8e0d-ca42de8e6653 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:30.688404625 +0000 UTC m=+970.492171278 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" (UID: "13a2ad28-4ba2-4470-8e0d-ca42de8e6653") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.994288 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:28 crc kubenswrapper[4937]: I0123 06:49:28.994456 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.994422 4937 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.994618 4937 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.994549 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:30.994535437 +0000 UTC m=+970.798302090 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "metrics-server-cert" not found Jan 23 06:49:28 crc kubenswrapper[4937]: E0123 06:49:28.994664 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:30.99465836 +0000 UTC m=+970.798425013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "webhook-server-cert" not found Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.048480 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" event={"ID":"23892b9d-9ef2-4c33-aaa0-1c858cd9255d","Type":"ContainerStarted","Data":"f0493c00fc9ba371c607f8d315bf525e1230e5c7581023824fb536b63e31f31d"} Jan 23 06:49:29 crc kubenswrapper[4937]: E0123 06:49:29.050025 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" podUID="23892b9d-9ef2-4c33-aaa0-1c858cd9255d" Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.051704 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7" 
event={"ID":"75724f78-fc93-46a1-bfb2-037fe76b1edd","Type":"ContainerStarted","Data":"40990fccd70c06bc8781489d50b3fab01e997c16d2abb308b11ef604ba792ea5"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.055731 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" event={"ID":"7e434a74-d86f-4d68-867f-bad41bdf53b5","Type":"ContainerStarted","Data":"97ef21744bab4a53de2dd5bb10cbf10b6413edd692c851814395535ed2ae3a61"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.068083 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv" event={"ID":"7f7029d6-79d1-4698-91ca-bc61d66124ab","Type":"ContainerStarted","Data":"94bb32321a1bed1c13317f382a0bc9c0f1b1e0d7740451e14270e56260ee49c5"} Jan 23 06:49:29 crc kubenswrapper[4937]: E0123 06:49:29.069624 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv" podUID="7f7029d6-79d1-4698-91ca-bc61d66124ab" Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.069999 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7" event={"ID":"77fd44ee-ceab-4595-890e-7310ec8b6cb2","Type":"ContainerStarted","Data":"c7a90f06cb6fd7fb25529153f6b40760b86aa46f3856a6d1fc277d694ded10d5"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.088673 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg" 
event={"ID":"17b47330-4556-499e-83dd-7e67a9a73824","Type":"ContainerStarted","Data":"d9ea0fb748792c19b537317dea5fada448ba9b48a1cd9e5eecf80fc1f2ec1a17"} Jan 23 06:49:29 crc kubenswrapper[4937]: E0123 06:49:29.089893 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg" podUID="17b47330-4556-499e-83dd-7e67a9a73824" Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.090152 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" event={"ID":"a58eade1-a27e-42ed-8a1b-f43803d53498","Type":"ContainerStarted","Data":"00cb2691f0e3983a060303fc70c16c846a684e2e034f444cdef2f9fded047abc"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.091615 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc" event={"ID":"7d322eb6-3116-4a70-8845-e62977302d86","Type":"ContainerStarted","Data":"07d33cf1c96d88adaf20ff1ec70f403e551b27b47f084615028f0ecc69c739e7"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.092966 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl" event={"ID":"5ba7dbd8-68af-4677-bd8e-686c19912769","Type":"ContainerStarted","Data":"87bdd8e29a4ec8ba88c24fc4d3f1d91d84724d965446468cd08c8fe8181246da"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.094558 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh" 
event={"ID":"9d773aa4-667d-431e-b63f-0f4c45d22d58","Type":"ContainerStarted","Data":"008f313fec50e689f471da105347fcd65c311d221def0a42a90700f87f9901a7"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.105986 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" event={"ID":"f255c056-65ce-42fc-9eb6-29395dcde9a3","Type":"ContainerStarted","Data":"ad133bd08d22df0553bff0f53116f1f3d7c2f98823bd730a3b8297078a3f64fe"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.109541 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq" event={"ID":"f1cca66c-d0b5-488a-a3d4-9e0b1714c33c","Type":"ContainerStarted","Data":"623e51594446358eb5492b9bf17124a25bb093558518849a3c74a289c2798434"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.111188 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" event={"ID":"c8fbb575-36e1-452e-8800-9b310540b205","Type":"ContainerStarted","Data":"ddc89ee22461016a2d94c63928c70ec56c75fe35fb991711d6783e8136c6cc1a"} Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.113301 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" event={"ID":"b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e","Type":"ContainerStarted","Data":"44c8f4e333ea64c0dd68460b8725f362988c024db6d4a7920fdcb9519ac32e32"} Jan 23 06:49:29 crc kubenswrapper[4937]: E0123 06:49:29.113891 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" 
podUID="c8fbb575-36e1-452e-8800-9b310540b205" Jan 23 06:49:29 crc kubenswrapper[4937]: I0123 06:49:29.116058 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k" event={"ID":"c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a","Type":"ContainerStarted","Data":"e277d02f804dd2e2f40b53f78e0f9a63ebf0f615eec77b0ee4b4b4da7957885f"} Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.130727 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/openstack-k8s-operators/watcher-operator:66a2a7ca52c97ab09e74ddf1b8f1663bf04650c3\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" podUID="23892b9d-9ef2-4c33-aaa0-1c858cd9255d" Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.131101 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg" podUID="17b47330-4556-499e-83dd-7e67a9a73824" Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.132381 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" podUID="c8fbb575-36e1-452e-8800-9b310540b205" Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.133221 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv" podUID="7f7029d6-79d1-4698-91ca-bc61d66124ab" Jan 23 06:49:30 crc kubenswrapper[4937]: I0123 06:49:30.421455 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.421696 4937 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.421788 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert podName:6c765e89-13b6-4588-a1a4-697b5553bdd0 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:34.421768298 +0000 UTC m=+974.225534951 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert") pod "infra-operator-controller-manager-58749ffdfb-9kphf" (UID: "6c765e89-13b6-4588-a1a4-697b5553bdd0") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:30 crc kubenswrapper[4937]: I0123 06:49:30.728337 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.728544 4937 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:30 crc kubenswrapper[4937]: E0123 06:49:30.728641 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert podName:13a2ad28-4ba2-4470-8e0d-ca42de8e6653 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:34.728585378 +0000 UTC m=+974.532352021 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" (UID: "13a2ad28-4ba2-4470-8e0d-ca42de8e6653") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.031876 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.031927 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:31 crc kubenswrapper[4937]: E0123 06:49:31.032042 4937 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:49:31 crc kubenswrapper[4937]: E0123 06:49:31.032092 4937 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:49:31 crc kubenswrapper[4937]: E0123 06:49:31.032118 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:35.032097133 +0000 UTC m=+974.835863906 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "metrics-server-cert" not found Jan 23 06:49:31 crc kubenswrapper[4937]: E0123 06:49:31.032141 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:35.032127154 +0000 UTC m=+974.835893807 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "webhook-server-cert" not found Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.593513 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hgnzh"] Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.595219 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.601401 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgnzh"] Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.641486 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t28t4\" (UniqueName: \"kubernetes.io/projected/e5f42c4b-4eec-42eb-910f-9a5e1be37121-kube-api-access-t28t4\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.641764 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-catalog-content\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.641818 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-utilities\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.745345 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t28t4\" (UniqueName: \"kubernetes.io/projected/e5f42c4b-4eec-42eb-910f-9a5e1be37121-kube-api-access-t28t4\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.745568 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-catalog-content\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.745633 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-utilities\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.746168 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-utilities\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.746180 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-catalog-content\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.787562 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t28t4\" (UniqueName: \"kubernetes.io/projected/e5f42c4b-4eec-42eb-910f-9a5e1be37121-kube-api-access-t28t4\") pod \"community-operators-hgnzh\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") " pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:31 crc kubenswrapper[4937]: I0123 06:49:31.957407 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgnzh" Jan 23 06:49:34 crc kubenswrapper[4937]: I0123 06:49:34.485665 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:34 crc kubenswrapper[4937]: E0123 06:49:34.486039 4937 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:34 crc kubenswrapper[4937]: E0123 06:49:34.486483 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert podName:6c765e89-13b6-4588-a1a4-697b5553bdd0 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:42.486458877 +0000 UTC m=+982.290225540 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert") pod "infra-operator-controller-manager-58749ffdfb-9kphf" (UID: "6c765e89-13b6-4588-a1a4-697b5553bdd0") : secret "infra-operator-webhook-server-cert" not found Jan 23 06:49:34 crc kubenswrapper[4937]: I0123 06:49:34.791489 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:34 crc kubenswrapper[4937]: E0123 06:49:34.791747 4937 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:34 crc kubenswrapper[4937]: E0123 06:49:34.791869 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert podName:13a2ad28-4ba2-4470-8e0d-ca42de8e6653 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:42.791833084 +0000 UTC m=+982.595599747 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" (UID: "13a2ad28-4ba2-4470-8e0d-ca42de8e6653") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 06:49:35 crc kubenswrapper[4937]: I0123 06:49:35.095766 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:35 crc kubenswrapper[4937]: I0123 06:49:35.095830 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:35 crc kubenswrapper[4937]: E0123 06:49:35.096026 4937 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 06:49:35 crc kubenswrapper[4937]: E0123 06:49:35.096147 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:43.096119501 +0000 UTC m=+982.899886344 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "metrics-server-cert" not found Jan 23 06:49:35 crc kubenswrapper[4937]: E0123 06:49:35.096052 4937 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 06:49:35 crc kubenswrapper[4937]: E0123 06:49:35.096722 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs podName:a7311b13-72db-4f12-9617-039ee018dee7 nodeName:}" failed. No retries permitted until 2026-01-23 06:49:43.096705297 +0000 UTC m=+982.900472150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs") pod "openstack-operator-controller-manager-8599c9cdcc-j97fl" (UID: "a7311b13-72db-4f12-9617-039ee018dee7") : secret "webhook-server-cert" not found Jan 23 06:49:39 crc kubenswrapper[4937]: E0123 06:49:39.367730 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 23 06:49:39 crc kubenswrapper[4937]: E0123 06:49:39.368059 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5zml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-gtfpn_openstack-operators(a97495b6-7a9f-454e-8197-af75abec2f3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:49:39 crc kubenswrapper[4937]: E0123 06:49:39.370047 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" podUID="a97495b6-7a9f-454e-8197-af75abec2f3e" Jan 23 06:49:40 crc kubenswrapper[4937]: E0123 06:49:40.139449 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 06:49:40 crc kubenswrapper[4937]: E0123 06:49:40.139665 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frjb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-99bb7_openstack-operators(75724f78-fc93-46a1-bfb2-037fe76b1edd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:49:40 crc kubenswrapper[4937]: E0123 06:49:40.141704 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7" podUID="75724f78-fc93-46a1-bfb2-037fe76b1edd" Jan 23 06:49:40 crc kubenswrapper[4937]: E0123 06:49:40.201373 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" podUID="a97495b6-7a9f-454e-8197-af75abec2f3e" Jan 23 06:49:40 crc kubenswrapper[4937]: E0123 06:49:40.203491 4937 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7" podUID="75724f78-fc93-46a1-bfb2-037fe76b1edd" Jan 23 06:49:41 crc kubenswrapper[4937]: I0123 06:49:41.527278 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 06:49:41 crc kubenswrapper[4937]: E0123 06:49:41.931895 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 06:49:41 crc kubenswrapper[4937]: E0123 06:49:41.932083 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgwmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-5bt2v_openstack-operators(f255c056-65ce-42fc-9eb6-29395dcde9a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:49:41 crc kubenswrapper[4937]: E0123 06:49:41.933324 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" podUID="f255c056-65ce-42fc-9eb6-29395dcde9a3" Jan 23 06:49:42 crc kubenswrapper[4937]: E0123 06:49:42.211796 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" podUID="f255c056-65ce-42fc-9eb6-29395dcde9a3" Jan 23 06:49:42 crc kubenswrapper[4937]: I0123 06:49:42.516865 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:42 crc kubenswrapper[4937]: I0123 06:49:42.536976 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c765e89-13b6-4588-a1a4-697b5553bdd0-cert\") pod \"infra-operator-controller-manager-58749ffdfb-9kphf\" (UID: \"6c765e89-13b6-4588-a1a4-697b5553bdd0\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:42 crc kubenswrapper[4937]: I0123 06:49:42.779053 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" Jan 23 06:49:42 crc kubenswrapper[4937]: I0123 06:49:42.821628 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:42 crc kubenswrapper[4937]: I0123 06:49:42.824709 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13a2ad28-4ba2-4470-8e0d-ca42de8e6653-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k\" (UID: \"13a2ad28-4ba2-4470-8e0d-ca42de8e6653\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:43 crc kubenswrapper[4937]: I0123 06:49:43.108214 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" Jan 23 06:49:43 crc kubenswrapper[4937]: I0123 06:49:43.125340 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:43 crc kubenswrapper[4937]: I0123 06:49:43.125380 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:43 crc kubenswrapper[4937]: I0123 06:49:43.128637 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-webhook-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:43 crc kubenswrapper[4937]: I0123 06:49:43.128784 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7311b13-72db-4f12-9617-039ee018dee7-metrics-certs\") pod \"openstack-operator-controller-manager-8599c9cdcc-j97fl\" (UID: \"a7311b13-72db-4f12-9617-039ee018dee7\") " pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:43 crc kubenswrapper[4937]: I0123 06:49:43.367697 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" Jan 23 06:49:54 crc kubenswrapper[4937]: E0123 06:49:54.861192 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 23 06:49:54 crc kubenswrapper[4937]: E0123 06:49:54.862026 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjdxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-wmhnh_openstack-operators(9d773aa4-667d-431e-b63f-0f4c45d22d58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:49:54 crc kubenswrapper[4937]: E0123 06:49:54.863330 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh" podUID="9d773aa4-667d-431e-b63f-0f4c45d22d58" Jan 23 06:49:55 crc kubenswrapper[4937]: E0123 06:49:55.300539 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh" podUID="9d773aa4-667d-431e-b63f-0f4c45d22d58" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.135023 4937 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.135518 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xsdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-qwg2k_openstack-operators(c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.137984 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k" podUID="c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.313650 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k" podUID="c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.692647 4937 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.692878 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2b7kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7qvkg_openstack-operators(a58eade1-a27e-42ed-8a1b-f43803d53498): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:49:57 crc kubenswrapper[4937]: E0123 06:49:57.694234 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" podUID="a58eade1-a27e-42ed-8a1b-f43803d53498" Jan 23 06:49:58 crc kubenswrapper[4937]: E0123 06:49:58.318961 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" podUID="a58eade1-a27e-42ed-8a1b-f43803d53498" Jan 23 06:50:01 crc 
kubenswrapper[4937]: I0123 06:50:01.029563 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgnzh"] Jan 23 06:50:01 crc kubenswrapper[4937]: W0123 06:50:01.174417 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5f42c4b_4eec_42eb_910f_9a5e1be37121.slice/crio-a25a7583d43cc5dcb6e835e05ad817dcf6dd6c2e39f364f4f92d08c0aadf12e0 WatchSource:0}: Error finding container a25a7583d43cc5dcb6e835e05ad817dcf6dd6c2e39f364f4f92d08c0aadf12e0: Status 404 returned error can't find the container with id a25a7583d43cc5dcb6e835e05ad817dcf6dd6c2e39f364f4f92d08c0aadf12e0 Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.348170 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" event={"ID":"b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e","Type":"ContainerStarted","Data":"e93c406b7b9045c6249dcf3f514cb2ea06c5ab76cc311a19b080cdc321579fb9"} Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.348521 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.350636 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" event={"ID":"166795a6-99d8-4030-89eb-7bdef35519dc","Type":"ContainerStarted","Data":"a8b1a1deac03c064a6ce0a8dc00e0e90fe452bbe245092dfbfed1015739c6cdd"} Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.350669 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.368936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" event={"ID":"a0de6431-d5d9-46ec-a7bf-b4c3c999ba22","Type":"ContainerStarted","Data":"3e22f011bff2a636966a73c164d1ae5389ad50ee961114daa05ebcbc43cd2867"} Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.374944 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f" podStartSLOduration=5.82664611 podStartE2EDuration="35.374926705s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.118539132 +0000 UTC m=+967.922305785" lastFinishedPulling="2026-01-23 06:49:57.666819727 +0000 UTC m=+997.470586380" observedRunningTime="2026-01-23 06:50:01.363969741 +0000 UTC m=+1001.167736394" watchObservedRunningTime="2026-01-23 06:50:01.374926705 +0000 UTC m=+1001.178693358" Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.377739 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"] Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.377789 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.379853 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgnzh" event={"ID":"e5f42c4b-4eec-42eb-910f-9a5e1be37121","Type":"ContainerStarted","Data":"a25a7583d43cc5dcb6e835e05ad817dcf6dd6c2e39f364f4f92d08c0aadf12e0"} Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.399931 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj" podStartSLOduration=5.417310577 podStartE2EDuration="35.399912786s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 
06:49:27.684231559 +0000 UTC m=+967.487998212" lastFinishedPulling="2026-01-23 06:49:57.666833768 +0000 UTC m=+997.470600421" observedRunningTime="2026-01-23 06:50:01.393237267 +0000 UTC m=+1001.197003920" watchObservedRunningTime="2026-01-23 06:50:01.399912786 +0000 UTC m=+1001.203679439" Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.424771 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4" podStartSLOduration=4.729227732 podStartE2EDuration="35.424583777s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:27.378218879 +0000 UTC m=+967.181985532" lastFinishedPulling="2026-01-23 06:49:58.073574924 +0000 UTC m=+997.877341577" observedRunningTime="2026-01-23 06:50:01.422717918 +0000 UTC m=+1001.226484571" watchObservedRunningTime="2026-01-23 06:50:01.424583777 +0000 UTC m=+1001.228350430" Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.448976 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"] Jan 23 06:50:01 crc kubenswrapper[4937]: I0123 06:50:01.466577 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"] Jan 23 06:50:01 crc kubenswrapper[4937]: W0123 06:50:01.478391 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13a2ad28_4ba2_4470_8e0d_ca42de8e6653.slice/crio-7b8abcdbd551bea77151ddb0da82ee5618c42dcf823783f70ad9b04abc76978f WatchSource:0}: Error finding container 7b8abcdbd551bea77151ddb0da82ee5618c42dcf823783f70ad9b04abc76978f: Status 404 returned error can't find the container with id 7b8abcdbd551bea77151ddb0da82ee5618c42dcf823783f70ad9b04abc76978f Jan 23 06:50:01 crc kubenswrapper[4937]: W0123 06:50:01.539772 4937 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c765e89_13b6_4588_a1a4_697b5553bdd0.slice/crio-1f04532acfde36d6faac79b0724dad02bf4bb748f2cc063a064c02fb4f8a628a WatchSource:0}: Error finding container 1f04532acfde36d6faac79b0724dad02bf4bb748f2cc063a064c02fb4f8a628a: Status 404 returned error can't find the container with id 1f04532acfde36d6faac79b0724dad02bf4bb748f2cc063a064c02fb4f8a628a Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.416962 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl" event={"ID":"5ba7dbd8-68af-4677-bd8e-686c19912769","Type":"ContainerStarted","Data":"af8d59b6cb609689bea96be29e6130055dddcc2bcfd7b095f7aa185fad4c5d5d"} Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.418076 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl" Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.424480 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" event={"ID":"7e434a74-d86f-4d68-867f-bad41bdf53b5","Type":"ContainerStarted","Data":"be32ec42f5c330f1787a6fbbf6ba4ff57207a878d524a23c3ad8d23b6a098270"} Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.425122 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.427411 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" event={"ID":"a97495b6-7a9f-454e-8197-af75abec2f3e","Type":"ContainerStarted","Data":"b1bb5a993869f176794c1d8ad97e3e6175f6ac2fb8458d30e7cfa6e335b5fdf5"} Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.427767 4937 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.448129 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl" podStartSLOduration=6.231603826 podStartE2EDuration="36.448106809s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.386022519 +0000 UTC m=+968.189789172" lastFinishedPulling="2026-01-23 06:49:58.602525502 +0000 UTC m=+998.406292155" observedRunningTime="2026-01-23 06:50:02.439808957 +0000 UTC m=+1002.243575610" watchObservedRunningTime="2026-01-23 06:50:02.448106809 +0000 UTC m=+1002.251873462"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.452418 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" event={"ID":"c8fbb575-36e1-452e-8800-9b310540b205","Type":"ContainerStarted","Data":"22b19f04c8a42204e25c1dfa88c92d18827a383bfb611acbba3c2e01714e409c"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.453079 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.470941 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp" podStartSLOduration=6.487796727 podStartE2EDuration="36.470926693s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.090604183 +0000 UTC m=+967.894370836" lastFinishedPulling="2026-01-23 06:49:58.073734149 +0000 UTC m=+997.877500802" observedRunningTime="2026-01-23 06:50:02.466094502 +0000 UTC m=+1002.269861155" watchObservedRunningTime="2026-01-23 06:50:02.470926693 +0000 UTC m=+1002.274693346"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.478974 4937 generic.go:334] "Generic (PLEG): container finished" podID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerID="025e19659afbe14b7bd5f4f1137366b334deb3adc0da502878990653c57f875b" exitCode=0
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.479032 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgnzh" event={"ID":"e5f42c4b-4eec-42eb-910f-9a5e1be37121","Type":"ContainerDied","Data":"025e19659afbe14b7bd5f4f1137366b334deb3adc0da502878990653c57f875b"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.491969 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn" podStartSLOduration=3.5053308210000003 podStartE2EDuration="36.491936777s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:27.645717437 +0000 UTC m=+967.449484080" lastFinishedPulling="2026-01-23 06:50:00.632323383 +0000 UTC m=+1000.436090036" observedRunningTime="2026-01-23 06:50:02.488038992 +0000 UTC m=+1002.291805645" watchObservedRunningTime="2026-01-23 06:50:02.491936777 +0000 UTC m=+1002.295703440"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.515462 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc" event={"ID":"7d322eb6-3116-4a70-8845-e62977302d86","Type":"ContainerStarted","Data":"642d08550b77fe072a94d5bb90b895e8d802f4606c273985774d1a96d11c29d3"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.515886 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.518874 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" event={"ID":"62aa0d56-f0fe-4cc7-a5dd-b15b7471844d","Type":"ContainerStarted","Data":"40fd3d78e493ac83ec804661bb84d57582826d7b9d6f99011d5d75eb048ab9fb"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.519714 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.521422 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv" event={"ID":"7f7029d6-79d1-4698-91ca-bc61d66124ab","Type":"ContainerStarted","Data":"a109de26f100dc1c7889af0758d8e22e38c22d7829860d3f0e30466e0953830d"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.570545 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.620011 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.620049 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7" event={"ID":"77fd44ee-ceab-4595-890e-7310ec8b6cb2","Type":"ContainerStarted","Data":"7e1aef6e83bde90e896aa0890ff80abf74ab548f46a4bf51d74e28519bc89409"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.623563 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd" podStartSLOduration=4.368366604 podStartE2EDuration="36.623547439s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.672855335 +0000 UTC m=+968.476621988" lastFinishedPulling="2026-01-23 06:50:00.92803617 +0000 UTC m=+1000.731802823" observedRunningTime="2026-01-23 06:50:02.593446161 +0000 UTC m=+1002.397212814" watchObservedRunningTime="2026-01-23 06:50:02.623547439 +0000 UTC m=+1002.427314092"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.634926 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7" event={"ID":"75724f78-fc93-46a1-bfb2-037fe76b1edd","Type":"ContainerStarted","Data":"d5d0ed7435d212797570bdec04be97695ebf82a0a8d578dca22a6899abbf8f1b"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.635939 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.641927 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb" podStartSLOduration=6.167478665 podStartE2EDuration="36.641911242s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:27.599072836 +0000 UTC m=+967.402839489" lastFinishedPulling="2026-01-23 06:49:58.073505413 +0000 UTC m=+997.877272066" observedRunningTime="2026-01-23 06:50:02.64183452 +0000 UTC m=+1002.445601173" watchObservedRunningTime="2026-01-23 06:50:02.641911242 +0000 UTC m=+1002.445677905"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.656845 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq" event={"ID":"f1cca66c-d0b5-488a-a3d4-9e0b1714c33c","Type":"ContainerStarted","Data":"87d74b7973b06b8f50da6872ed94ac40d35e8c748ee05012c91af7c5af694de3"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.657462 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.675881 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" event={"ID":"a7311b13-72db-4f12-9617-039ee018dee7","Type":"ContainerStarted","Data":"71761992be4f628753273b5860a16f5baa6650735d5c45ae185277cfb069f6c8"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.675926 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" event={"ID":"a7311b13-72db-4f12-9617-039ee018dee7","Type":"ContainerStarted","Data":"6a77c5653f96bf45e8c10fb9f78851cbd591ed18d1b8dd049ffdd1c047c4164f"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.676546 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.686702 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" event={"ID":"13a2ad28-4ba2-4470-8e0d-ca42de8e6653","Type":"ContainerStarted","Data":"7b8abcdbd551bea77151ddb0da82ee5618c42dcf823783f70ad9b04abc76978f"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.688059 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx" event={"ID":"e1351960-51ec-4735-9019-d267f29568d5","Type":"ContainerStarted","Data":"e734219bdabf07ee38faf75930777a43804afaef518a31e3b00847ac87d7fc22"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.688611 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.701885 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" event={"ID":"6c765e89-13b6-4588-a1a4-697b5553bdd0","Type":"ContainerStarted","Data":"1f04532acfde36d6faac79b0724dad02bf4bb748f2cc063a064c02fb4f8a628a"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.709290 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv" podStartSLOduration=4.44262319 podStartE2EDuration="36.70927212s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.660883946 +0000 UTC m=+968.464650609" lastFinishedPulling="2026-01-23 06:50:00.927532886 +0000 UTC m=+1000.731299539" observedRunningTime="2026-01-23 06:50:02.704142582 +0000 UTC m=+1002.507909235" watchObservedRunningTime="2026-01-23 06:50:02.70927212 +0000 UTC m=+1002.513038773"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.709722 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc" podStartSLOduration=6.8515715440000005 podStartE2EDuration="36.709715151s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.215066568 +0000 UTC m=+968.018833221" lastFinishedPulling="2026-01-23 06:49:58.073210175 +0000 UTC m=+997.876976828" observedRunningTime="2026-01-23 06:50:02.67574105 +0000 UTC m=+1002.479507703" watchObservedRunningTime="2026-01-23 06:50:02.709715151 +0000 UTC m=+1002.513481804"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.726936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg" event={"ID":"17b47330-4556-499e-83dd-7e67a9a73824","Type":"ContainerStarted","Data":"ea87f1ce05575e407449213843421482f4237db5f3bfae6e566868135eb8ffc0"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.727785 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.744651 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7" podStartSLOduration=7.288219397 podStartE2EDuration="36.744628019s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.616169705 +0000 UTC m=+968.419936358" lastFinishedPulling="2026-01-23 06:49:58.072578317 +0000 UTC m=+997.876344980" observedRunningTime="2026-01-23 06:50:02.739230783 +0000 UTC m=+1002.542997456" watchObservedRunningTime="2026-01-23 06:50:02.744628019 +0000 UTC m=+1002.548394682"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.752915 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" event={"ID":"23892b9d-9ef2-4c33-aaa0-1c858cd9255d","Type":"ContainerStarted","Data":"39b4332abb3a7494559a6d340415aee352880f6f0a629325a73180fc88733306"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.753127 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.766072 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" event={"ID":"f255c056-65ce-42fc-9eb6-29395dcde9a3","Type":"ContainerStarted","Data":"e70a086320c31fd33be7776dad9af93bb8e6a36a441aef655e9b6b200c832bb3"}
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.766377 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.781910 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq" podStartSLOduration=6.9162962409999995 podStartE2EDuration="36.781890449s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.206886017 +0000 UTC m=+968.010652670" lastFinishedPulling="2026-01-23 06:49:58.072480215 +0000 UTC m=+997.876246878" observedRunningTime="2026-01-23 06:50:02.767147883 +0000 UTC m=+1002.570914536" watchObservedRunningTime="2026-01-23 06:50:02.781890449 +0000 UTC m=+1002.585657102"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.847434 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl" podStartSLOduration=35.847417348 podStartE2EDuration="35.847417348s" podCreationTimestamp="2026-01-23 06:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:50:02.809711256 +0000 UTC m=+1002.613477919" watchObservedRunningTime="2026-01-23 06:50:02.847417348 +0000 UTC m=+1002.651184001"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.863924 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx" podStartSLOduration=7.098629519 podStartE2EDuration="36.86390261s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:27.901299849 +0000 UTC m=+967.705066502" lastFinishedPulling="2026-01-23 06:49:57.66657294 +0000 UTC m=+997.470339593" observedRunningTime="2026-01-23 06:50:02.860814427 +0000 UTC m=+1002.664581070" watchObservedRunningTime="2026-01-23 06:50:02.86390261 +0000 UTC m=+1002.667669263"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.894549 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7" podStartSLOduration=4.62996752 podStartE2EDuration="36.894528382s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.366534048 +0000 UTC m=+968.170300701" lastFinishedPulling="2026-01-23 06:50:00.63109491 +0000 UTC m=+1000.434861563" observedRunningTime="2026-01-23 06:50:02.89107329 +0000 UTC m=+1002.694839953" watchObservedRunningTime="2026-01-23 06:50:02.894528382 +0000 UTC m=+1002.698295055"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.931497 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg" podStartSLOduration=4.65877682 podStartE2EDuration="36.931483744s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.653426505 +0000 UTC m=+968.457193158" lastFinishedPulling="2026-01-23 06:50:00.926133429 +0000 UTC m=+1000.729900082" observedRunningTime="2026-01-23 06:50:02.924859216 +0000 UTC m=+1002.728625879" watchObservedRunningTime="2026-01-23 06:50:02.931483744 +0000 UTC m=+1002.735250397"
Jan 23 06:50:02 crc kubenswrapper[4937]: I0123 06:50:02.968132 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f" podStartSLOduration=4.136183541 podStartE2EDuration="36.968112037s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.67306337 +0000 UTC m=+968.476830013" lastFinishedPulling="2026-01-23 06:50:01.504991856 +0000 UTC m=+1001.308758509" observedRunningTime="2026-01-23 06:50:02.962580909 +0000 UTC m=+1002.766347562" watchObservedRunningTime="2026-01-23 06:50:02.968112037 +0000 UTC m=+1002.771878690"
Jan 23 06:50:03 crc kubenswrapper[4937]: I0123 06:50:03.018106 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v" podStartSLOduration=4.514429019 podStartE2EDuration="37.018083018s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.128033217 +0000 UTC m=+967.931799870" lastFinishedPulling="2026-01-23 06:50:00.631687216 +0000 UTC m=+1000.435453869" observedRunningTime="2026-01-23 06:50:03.001982756 +0000 UTC m=+1002.805749429" watchObservedRunningTime="2026-01-23 06:50:03.018083018 +0000 UTC m=+1002.821849671"
Jan 23 06:50:04 crc kubenswrapper[4937]: I0123 06:50:04.784441 4937 generic.go:334] "Generic (PLEG): container finished" podID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerID="9f7c743de5d48322e8ea48dc1c24b46b37339734469474e6507a45f7e23168d9" exitCode=0
Jan 23 06:50:04 crc kubenswrapper[4937]: I0123 06:50:04.784509 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgnzh" event={"ID":"e5f42c4b-4eec-42eb-910f-9a5e1be37121","Type":"ContainerDied","Data":"9f7c743de5d48322e8ea48dc1c24b46b37339734469474e6507a45f7e23168d9"}
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.708354 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-qkqbb"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.722974 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-k77l4"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.774695 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gtfpn"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.787452 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-4k4vj"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.797710 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5l6fx"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.813097 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" event={"ID":"13a2ad28-4ba2-4470-8e0d-ca42de8e6653","Type":"ContainerStarted","Data":"811ff2e7a63c8310b817e5debe40f1d195f03b9069aa50b479fbd4560a4bee45"}
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.813965 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.815463 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" event={"ID":"6c765e89-13b6-4588-a1a4-697b5553bdd0","Type":"ContainerStarted","Data":"0d70656881af8e3d7e9b8aa416b293a64668b9ef83275b2384f32dd47bfdadb7"}
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.816960 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.822549 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgnzh" event={"ID":"e5f42c4b-4eec-42eb-910f-9a5e1be37121","Type":"ContainerStarted","Data":"8ac99bcd6aff130bd9ba56c872aa16879c4c2edffada40a2ab4e3de52ee2522d"}
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.857840 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hfslp"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.865396 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k" podStartSLOduration=36.643868964 podStartE2EDuration="40.865385712s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:50:01.486522291 +0000 UTC m=+1001.290288954" lastFinishedPulling="2026-01-23 06:50:05.708039029 +0000 UTC m=+1005.511805702" observedRunningTime="2026-01-23 06:50:06.859078734 +0000 UTC m=+1006.662845387" watchObservedRunningTime="2026-01-23 06:50:06.865385712 +0000 UTC m=+1006.669152365"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.879914 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-c5c4f"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.890061 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf" podStartSLOduration=36.714595833 podStartE2EDuration="40.890046235s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:50:01.543207922 +0000 UTC m=+1001.346974575" lastFinishedPulling="2026-01-23 06:50:05.718658334 +0000 UTC m=+1005.522424977" observedRunningTime="2026-01-23 06:50:06.888913684 +0000 UTC m=+1006.692680337" watchObservedRunningTime="2026-01-23 06:50:06.890046235 +0000 UTC m=+1006.693812888"
Jan 23 06:50:06 crc kubenswrapper[4937]: I0123 06:50:06.913590 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hgnzh" podStartSLOduration=32.728871058 podStartE2EDuration="35.913563206s" podCreationTimestamp="2026-01-23 06:49:31 +0000 UTC" firstStartedPulling="2026-01-23 06:50:02.514705638 +0000 UTC m=+1002.318472291" lastFinishedPulling="2026-01-23 06:50:05.699397786 +0000 UTC m=+1005.503164439" observedRunningTime="2026-01-23 06:50:06.909390664 +0000 UTC m=+1006.713157317" watchObservedRunningTime="2026-01-23 06:50:06.913563206 +0000 UTC m=+1006.717329859"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.127658 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xrgdc"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.127705 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.130120 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sd8vq"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.176790 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-pxrfl"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.188028 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-4cdbv"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.269815 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-99bb7"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.340916 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-9q4dg"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.396141 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5d9cd495bb-kw22f"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.458094 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-9xwkd"
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.724553 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:50:07 crc kubenswrapper[4937]: I0123 06:50:07.725502 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:50:08 crc kubenswrapper[4937]: I0123 06:50:08.836075 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh" event={"ID":"9d773aa4-667d-431e-b63f-0f4c45d22d58","Type":"ContainerStarted","Data":"2b256087922bd5147ec47009bf13e9541a3f51f4a4b993caf90dcad23a1bfe5e"}
Jan 23 06:50:08 crc kubenswrapper[4937]: I0123 06:50:08.836652 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:50:08 crc kubenswrapper[4937]: I0123 06:50:08.851421 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh" podStartSLOduration=3.1285464689999998 podStartE2EDuration="42.851405799s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.377663114 +0000 UTC m=+968.181429757" lastFinishedPulling="2026-01-23 06:50:08.100522434 +0000 UTC m=+1007.904289087" observedRunningTime="2026-01-23 06:50:08.848964213 +0000 UTC m=+1008.652730856" watchObservedRunningTime="2026-01-23 06:50:08.851405799 +0000 UTC m=+1008.655172452"
Jan 23 06:50:11 crc kubenswrapper[4937]: I0123 06:50:11.863250 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k" event={"ID":"c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a","Type":"ContainerStarted","Data":"8671538bf6a39bf024cd7e12949b8e285c87fdd47eabd73b2388d65559800c45"}
Jan 23 06:50:11 crc kubenswrapper[4937]: I0123 06:50:11.863801 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:50:11 crc kubenswrapper[4937]: I0123 06:50:11.879994 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k" podStartSLOduration=3.015096044 podStartE2EDuration="45.879974808s" podCreationTimestamp="2026-01-23 06:49:26 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.377235283 +0000 UTC m=+968.181001936" lastFinishedPulling="2026-01-23 06:50:11.242114047 +0000 UTC m=+1011.045880700" observedRunningTime="2026-01-23 06:50:11.879411352 +0000 UTC m=+1011.683178015" watchObservedRunningTime="2026-01-23 06:50:11.879974808 +0000 UTC m=+1011.683741461"
Jan 23 06:50:11 crc kubenswrapper[4937]: I0123 06:50:11.958879 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hgnzh"
Jan 23 06:50:11 crc kubenswrapper[4937]: I0123 06:50:11.959058 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hgnzh"
Jan 23 06:50:12 crc kubenswrapper[4937]: I0123 06:50:12.022833 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hgnzh"
Jan 23 06:50:12 crc kubenswrapper[4937]: I0123 06:50:12.784817 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-9kphf"
Jan 23 06:50:12 crc kubenswrapper[4937]: I0123 06:50:12.874652 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" event={"ID":"a58eade1-a27e-42ed-8a1b-f43803d53498","Type":"ContainerStarted","Data":"a3f80001573fe9cb78c0a58c281a1ff1848ab9a6525b62ca676f97eecc891658"}
Jan 23 06:50:12 crc kubenswrapper[4937]: I0123 06:50:12.892795 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7qvkg" podStartSLOduration=2.549506226 podStartE2EDuration="45.892776722s" podCreationTimestamp="2026-01-23 06:49:27 +0000 UTC" firstStartedPulling="2026-01-23 06:49:28.64431485 +0000 UTC m=+968.448081503" lastFinishedPulling="2026-01-23 06:50:11.987585346 +0000 UTC m=+1011.791351999" observedRunningTime="2026-01-23 06:50:12.892007562 +0000 UTC m=+1012.695774215" watchObservedRunningTime="2026-01-23 06:50:12.892776722 +0000 UTC m=+1012.696543375"
Jan 23 06:50:12 crc kubenswrapper[4937]: I0123 06:50:12.919249 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hgnzh"
Jan 23 06:50:12 crc kubenswrapper[4937]: I0123 06:50:12.977981 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgnzh"]
Jan 23 06:50:13 crc kubenswrapper[4937]: I0123 06:50:13.115765 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k"
Jan 23 06:50:13 crc kubenswrapper[4937]: I0123 06:50:13.373474 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8599c9cdcc-j97fl"
Jan 23 06:50:14 crc kubenswrapper[4937]: I0123 06:50:14.888705 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hgnzh" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="registry-server" containerID="cri-o://8ac99bcd6aff130bd9ba56c872aa16879c4c2edffada40a2ab4e3de52ee2522d" gracePeriod=2
Jan 23 06:50:16 crc kubenswrapper[4937]: I0123 06:50:16.900052 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-5bt2v"
Jan 23 06:50:16 crc kubenswrapper[4937]: I0123 06:50:16.905759 4937 generic.go:334] "Generic (PLEG): container finished" podID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerID="8ac99bcd6aff130bd9ba56c872aa16879c4c2edffada40a2ab4e3de52ee2522d" exitCode=0
Jan 23 06:50:16 crc kubenswrapper[4937]: I0123 06:50:16.905849 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgnzh" event={"ID":"e5f42c4b-4eec-42eb-910f-9a5e1be37121","Type":"ContainerDied","Data":"8ac99bcd6aff130bd9ba56c872aa16879c4c2edffada40a2ab4e3de52ee2522d"}
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.138958 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-wmhnh"
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.204013 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-qwg2k"
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.473039 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgnzh"
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.597471 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t28t4\" (UniqueName: \"kubernetes.io/projected/e5f42c4b-4eec-42eb-910f-9a5e1be37121-kube-api-access-t28t4\") pod \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") "
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.597587 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-catalog-content\") pod \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") "
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.597716 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-utilities\") pod \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\" (UID: \"e5f42c4b-4eec-42eb-910f-9a5e1be37121\") "
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.598627 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-utilities" (OuterVolumeSpecName: "utilities") pod "e5f42c4b-4eec-42eb-910f-9a5e1be37121" (UID: "e5f42c4b-4eec-42eb-910f-9a5e1be37121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.603393 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f42c4b-4eec-42eb-910f-9a5e1be37121-kube-api-access-t28t4" (OuterVolumeSpecName: "kube-api-access-t28t4") pod "e5f42c4b-4eec-42eb-910f-9a5e1be37121" (UID: "e5f42c4b-4eec-42eb-910f-9a5e1be37121"). InnerVolumeSpecName "kube-api-access-t28t4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.641883 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5f42c4b-4eec-42eb-910f-9a5e1be37121" (UID: "e5f42c4b-4eec-42eb-910f-9a5e1be37121"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.699023 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.699062 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t28t4\" (UniqueName: \"kubernetes.io/projected/e5f42c4b-4eec-42eb-910f-9a5e1be37121-kube-api-access-t28t4\") on node \"crc\" DevicePath \"\""
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.699074 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5f42c4b-4eec-42eb-910f-9a5e1be37121-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.914234 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgnzh" event={"ID":"e5f42c4b-4eec-42eb-910f-9a5e1be37121","Type":"ContainerDied","Data":"a25a7583d43cc5dcb6e835e05ad817dcf6dd6c2e39f364f4f92d08c0aadf12e0"}
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.914297 4937 scope.go:117] "RemoveContainer" containerID="8ac99bcd6aff130bd9ba56c872aa16879c4c2edffada40a2ab4e3de52ee2522d"
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.914327 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgnzh"
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.940628 4937 scope.go:117] "RemoveContainer" containerID="9f7c743de5d48322e8ea48dc1c24b46b37339734469474e6507a45f7e23168d9"
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.949271 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgnzh"]
Jan 23 06:50:17 crc kubenswrapper[4937]: I0123 06:50:17.970023 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hgnzh"]
Jan 23 06:50:18 crc kubenswrapper[4937]: I0123 06:50:18.000748 4937 scope.go:117] "RemoveContainer" containerID="025e19659afbe14b7bd5f4f1137366b334deb3adc0da502878990653c57f875b"
Jan 23 06:50:18 crc kubenswrapper[4937]: I0123 06:50:18.544691 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" path="/var/lib/kubelet/pods/e5f42c4b-4eec-42eb-910f-9a5e1be37121/volumes"
Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.690840 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68cdf6699c-257b4"]
Jan 23 06:50:35 crc kubenswrapper[4937]: E0123 06:50:35.691888 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="registry-server"
Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.691907 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="registry-server"
Jan 23 06:50:35 crc kubenswrapper[4937]: E0123 06:50:35.691931 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="extract-content"
Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.691939 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="extract-content"
Jan 23 
06:50:35 crc kubenswrapper[4937]: E0123 06:50:35.691956 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="extract-utilities" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.691963 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="extract-utilities" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.692147 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f42c4b-4eec-42eb-910f-9a5e1be37121" containerName="registry-server" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.693057 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.700359 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.700677 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4lhjt" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.700830 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.703337 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.705904 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68cdf6699c-257b4"] Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.793670 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fc5545f-c4rvb"] Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.802121 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.804375 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.827425 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fc5545f-c4rvb"] Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.875162 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2qk\" (UniqueName: \"kubernetes.io/projected/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-kube-api-access-gw2qk\") pod \"dnsmasq-dns-68cdf6699c-257b4\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.875218 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-config\") pod \"dnsmasq-dns-68cdf6699c-257b4\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.976224 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-config\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.976284 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw2qk\" (UniqueName: \"kubernetes.io/projected/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-kube-api-access-gw2qk\") pod \"dnsmasq-dns-68cdf6699c-257b4\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 
23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.976327 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-config\") pod \"dnsmasq-dns-68cdf6699c-257b4\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.976397 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wvr\" (UniqueName: \"kubernetes.io/projected/a99c63c6-94a6-4a26-ab33-e492d6644b25-kube-api-access-g6wvr\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.976449 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-dns-svc\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.977469 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-config\") pod \"dnsmasq-dns-68cdf6699c-257b4\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:35 crc kubenswrapper[4937]: I0123 06:50:35.995895 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw2qk\" (UniqueName: \"kubernetes.io/projected/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-kube-api-access-gw2qk\") pod \"dnsmasq-dns-68cdf6699c-257b4\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 
06:50:36.028109 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.077464 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wvr\" (UniqueName: \"kubernetes.io/projected/a99c63c6-94a6-4a26-ab33-e492d6644b25-kube-api-access-g6wvr\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.077872 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-dns-svc\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.077904 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-config\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.078715 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-dns-svc\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.078997 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-config\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " 
pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.097274 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wvr\" (UniqueName: \"kubernetes.io/projected/a99c63c6-94a6-4a26-ab33-e492d6644b25-kube-api-access-g6wvr\") pod \"dnsmasq-dns-55fc5545f-c4rvb\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.117922 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.257891 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68cdf6699c-257b4"] Jan 23 06:50:36 crc kubenswrapper[4937]: W0123 06:50:36.567963 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda99c63c6_94a6_4a26_ab33_e492d6644b25.slice/crio-717567e9398832d1967e061a4928c871ad4ca4289d87f6df47175e50c49a6594 WatchSource:0}: Error finding container 717567e9398832d1967e061a4928c871ad4ca4289d87f6df47175e50c49a6594: Status 404 returned error can't find the container with id 717567e9398832d1967e061a4928c871ad4ca4289d87f6df47175e50c49a6594 Jan 23 06:50:36 crc kubenswrapper[4937]: I0123 06:50:36.568620 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fc5545f-c4rvb"] Jan 23 06:50:37 crc kubenswrapper[4937]: I0123 06:50:37.059097 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" event={"ID":"a99c63c6-94a6-4a26-ab33-e492d6644b25","Type":"ContainerStarted","Data":"717567e9398832d1967e061a4928c871ad4ca4289d87f6df47175e50c49a6594"} Jan 23 06:50:37 crc kubenswrapper[4937]: I0123 06:50:37.061558 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68cdf6699c-257b4" 
event={"ID":"de9f1ace-5aaf-42ed-b6ad-906825c2fb34","Type":"ContainerStarted","Data":"c587d520ac6239473c0f6310d9e9f79e550780efb73e72ba6fbec5567400bb1c"} Jan 23 06:50:37 crc kubenswrapper[4937]: I0123 06:50:37.724732 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:50:37 crc kubenswrapper[4937]: I0123 06:50:37.725256 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.495147 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fc5545f-c4rvb"] Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.522035 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff6ccbbbc-s2msr"] Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.523735 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.535618 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff6ccbbbc-s2msr"] Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.633242 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d2xc\" (UniqueName: \"kubernetes.io/projected/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-kube-api-access-2d2xc\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.633303 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-dns-svc\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.633335 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-config\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.734150 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d2xc\" (UniqueName: \"kubernetes.io/projected/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-kube-api-access-2d2xc\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.734221 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-dns-svc\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.734263 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-config\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.735286 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-dns-svc\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.735420 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-config\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.759855 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d2xc\" (UniqueName: \"kubernetes.io/projected/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-kube-api-access-2d2xc\") pod \"dnsmasq-dns-7ff6ccbbbc-s2msr\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.778535 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68cdf6699c-257b4"] Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.809069 4937 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-db4cc579f-z4cj4"] Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.810109 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.828384 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db4cc579f-z4cj4"] Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.836362 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-dns-svc\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.836453 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwzj\" (UniqueName: \"kubernetes.io/projected/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-kube-api-access-ldwzj\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.836474 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-config\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.842318 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.938203 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-dns-svc\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.938289 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwzj\" (UniqueName: \"kubernetes.io/projected/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-kube-api-access-ldwzj\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.938312 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-config\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.939371 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-dns-svc\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 06:50:39.939472 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-config\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:39 crc kubenswrapper[4937]: I0123 
06:50:39.963627 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwzj\" (UniqueName: \"kubernetes.io/projected/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-kube-api-access-ldwzj\") pod \"dnsmasq-dns-db4cc579f-z4cj4\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.082465 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6ccbbbc-s2msr"] Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.104021 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fb9cb7945-t6mcj"] Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.105194 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.121431 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb9cb7945-t6mcj"] Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.126154 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.141512 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-config\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.141582 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr2dj\" (UniqueName: \"kubernetes.io/projected/ffd88816-be2b-4c27-a5f5-a061d44c8e63-kube-api-access-jr2dj\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.141697 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-dns-svc\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.200722 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6ccbbbc-s2msr"] Jan 23 06:50:40 crc kubenswrapper[4937]: W0123 06:50:40.213214 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605c07a2_d595_45ed_924f_6ff0d6cd1eb0.slice/crio-cb73c5911ee9b7f71430eeb7f22ad5b457bd7815c1b20508c95fdff314de88e9 WatchSource:0}: Error finding container cb73c5911ee9b7f71430eeb7f22ad5b457bd7815c1b20508c95fdff314de88e9: Status 404 returned error can't find the container with id cb73c5911ee9b7f71430eeb7f22ad5b457bd7815c1b20508c95fdff314de88e9 Jan 23 06:50:40 
crc kubenswrapper[4937]: I0123 06:50:40.242834 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr2dj\" (UniqueName: \"kubernetes.io/projected/ffd88816-be2b-4c27-a5f5-a061d44c8e63-kube-api-access-jr2dj\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.243116 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-dns-svc\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.243156 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-config\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.244638 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-dns-svc\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.246780 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-config\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.264019 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jr2dj\" (UniqueName: \"kubernetes.io/projected/ffd88816-be2b-4c27-a5f5-a061d44c8e63-kube-api-access-jr2dj\") pod \"dnsmasq-dns-5fb9cb7945-t6mcj\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.378726 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db4cc579f-z4cj4"] Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.427903 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:50:40 crc kubenswrapper[4937]: I0123 06:50:40.637019 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb9cb7945-t6mcj"] Jan 23 06:50:40 crc kubenswrapper[4937]: W0123 06:50:40.638929 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffd88816_be2b_4c27_a5f5_a061d44c8e63.slice/crio-7a91ffed56891a5fb54446589fcd01ad26e15cde4dd8758e3decf607a3611bd8 WatchSource:0}: Error finding container 7a91ffed56891a5fb54446589fcd01ad26e15cde4dd8758e3decf607a3611bd8: Status 404 returned error can't find the container with id 7a91ffed56891a5fb54446589fcd01ad26e15cde4dd8758e3decf607a3611bd8 Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.091992 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" event={"ID":"605c07a2-d595-45ed-924f-6ff0d6cd1eb0","Type":"ContainerStarted","Data":"cb73c5911ee9b7f71430eeb7f22ad5b457bd7815c1b20508c95fdff314de88e9"} Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.092774 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" event={"ID":"ffd88816-be2b-4c27-a5f5-a061d44c8e63","Type":"ContainerStarted","Data":"7a91ffed56891a5fb54446589fcd01ad26e15cde4dd8758e3decf607a3611bd8"} Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 
06:50:41.093652 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" event={"ID":"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03","Type":"ContainerStarted","Data":"3f9b293bfd69784b69a9623de0bfabd752aee8148569a48109ddadcc49b23b1c"} Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.793026 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.795026 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.799689 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.799876 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.800021 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.800144 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.800321 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.800466 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.800633 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rzll5" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.805244 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.806561 4937 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.809871 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.810070 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.810433 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.810878 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.811018 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-f6wcq" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.817226 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.817419 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.817426 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.818667 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.820938 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.820958 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.821147 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.821172 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.821265 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.821399 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.822115 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r4ghn" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.822248 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.828739 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.863757 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966077 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966156 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c22daa68-7c34-4180-adcc-d939bfa5a607-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966189 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4dh\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-kube-api-access-fg4dh\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966220 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966236 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966256 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966270 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966286 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966302 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvkj5\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-kube-api-access-hvkj5\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966317 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966331 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966358 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966372 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966393 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966410 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966426 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966443 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966460 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966477 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966664 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966726 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966764 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c22daa68-7c34-4180-adcc-d939bfa5a607-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966781 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966803 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966817 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966837 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966860 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966879 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966900 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966916 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vkpt\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-kube-api-access-5vkpt\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966931 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966949 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:41 crc kubenswrapper[4937]: I0123 06:50:41.966967 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.068872 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069020 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069098 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069117 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069133 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069152 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvkj5\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-kube-api-access-hvkj5\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069170 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069183 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069207 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069221 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069243 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069262 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069277 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " 
pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069294 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069312 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069326 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069342 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069361 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069379 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c22daa68-7c34-4180-adcc-d939bfa5a607-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069398 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069421 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069436 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069460 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.069481 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.070266 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.070618 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.070639 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.070965 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.071071 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-plugins\") 
pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.071302 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.071561 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.071867 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.071894 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072141 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 
06:50:42.072150 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072223 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072894 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072933 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072949 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vkpt\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-kube-api-access-5vkpt\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072964 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.072983 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.073005 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.073035 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.073061 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c22daa68-7c34-4180-adcc-d939bfa5a607-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.073077 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg4dh\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-kube-api-access-fg4dh\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.074057 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.074649 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-config-data\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.074912 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.075927 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.076146 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.092066 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.093855 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.098386 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.099260 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.099713 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c22daa68-7c34-4180-adcc-d939bfa5a607-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.104306 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.104471 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.105339 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.107132 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.116705 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.126850 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.127277 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvkj5\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-kube-api-access-hvkj5\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.132178 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vkpt\" (UniqueName: \"kubernetes.io/projected/4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e-kube-api-access-5vkpt\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.132195 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg4dh\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-kube-api-access-fg4dh\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.132727 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c22daa68-7c34-4180-adcc-d939bfa5a607-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.144463 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.166552 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e\") " pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.177905 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.178718 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.435207 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.458820 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.474768 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.590956 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.592346 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.594557 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.594730 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.594986 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-svvgx"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.600360 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.602331 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.612266 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691416 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vkb\" (UniqueName: \"kubernetes.io/projected/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-kube-api-access-42vkb\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691776 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691808 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691879 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691944 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691968 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.691998 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.692465 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.794756 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.794812 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.794848 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.794898 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.794950 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42vkb\" (UniqueName: \"kubernetes.io/projected/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-kube-api-access-42vkb\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.794976 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.795005 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.795040 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.795880 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.796173 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.796904 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.797335 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.797695 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.800482 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.812716 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.812790 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vkb\" (UniqueName: \"kubernetes.io/projected/cdd3a96c-6f65-4f39-b435-78f7ceed08b5-kube-api-access-42vkb\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.832841 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"cdd3a96c-6f65-4f39-b435-78f7ceed08b5\") " pod="openstack/openstack-galera-0"
Jan 23 06:50:42 crc kubenswrapper[4937]: I0123 06:50:42.926110 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.052044 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.060264 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 06:50:43 crc kubenswrapper[4937]: W0123 06:50:43.072003 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22daa68_7c34_4180_adcc_d939bfa5a607.slice/crio-35f6874d7aeb8b91ed1191607f1917e454c9bdda2d998b2b3b22f1efffeeaf70 WatchSource:0}: Error finding container 35f6874d7aeb8b91ed1191607f1917e454c9bdda2d998b2b3b22f1efffeeaf70: Status 404 returned error can't find the container with id 35f6874d7aeb8b91ed1191607f1917e454c9bdda2d998b2b3b22f1efffeeaf70
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.115273 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c22daa68-7c34-4180-adcc-d939bfa5a607","Type":"ContainerStarted","Data":"35f6874d7aeb8b91ed1191607f1917e454c9bdda2d998b2b3b22f1efffeeaf70"}
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.124953 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6","Type":"ContainerStarted","Data":"d9ace14a6134c1dc077ceeaab231eb84e4c39cf1a83e76b15f050bb96fb8da56"}
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.155013 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Jan 23 06:50:43 crc kubenswrapper[4937]: W0123 06:50:43.168142 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f26efb9_1fb5_49cf_a9b1_077aa91f3e7e.slice/crio-f719fdba431bc8d934b829179bef2519bd4e98b593e14caa38d8b7bb4dde26c6 WatchSource:0}: Error finding container f719fdba431bc8d934b829179bef2519bd4e98b593e14caa38d8b7bb4dde26c6: Status 404 returned error can't find the container with id f719fdba431bc8d934b829179bef2519bd4e98b593e14caa38d8b7bb4dde26c6
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.476811 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 06:50:43 crc kubenswrapper[4937]: W0123 06:50:43.500696 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdd3a96c_6f65_4f39_b435_78f7ceed08b5.slice/crio-91a7a390b636197011306204a3e537db304245c4edbb72650766ff27628a3e66 WatchSource:0}: Error finding container 91a7a390b636197011306204a3e537db304245c4edbb72650766ff27628a3e66: Status 404 returned error can't find the container with id 91a7a390b636197011306204a3e537db304245c4edbb72650766ff27628a3e66
Jan 23 06:50:43 crc kubenswrapper[4937]: I0123 06:50:43.998629 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.002484 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.006725 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-g9lk4"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.007009 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.008685 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.011525 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.015357 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.122879 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxvc\" (UniqueName: \"kubernetes.io/projected/f256fcd3-0094-4316-acac-5cc6424f12d0-kube-api-access-2zxvc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.122936 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.123061 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.123302 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f256fcd3-0094-4316-acac-5cc6424f12d0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.123344 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f256fcd3-0094-4316-acac-5cc6424f12d0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.123418 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f256fcd3-0094-4316-acac-5cc6424f12d0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.123451 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.123486 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.132948 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e","Type":"ContainerStarted","Data":"f719fdba431bc8d934b829179bef2519bd4e98b593e14caa38d8b7bb4dde26c6"}
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.134890 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cdd3a96c-6f65-4f39-b435-78f7ceed08b5","Type":"ContainerStarted","Data":"91a7a390b636197011306204a3e537db304245c4edbb72650766ff27628a3e66"}
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.228391 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f256fcd3-0094-4316-acac-5cc6424f12d0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.228549 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f256fcd3-0094-4316-acac-5cc6424f12d0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.228817 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f256fcd3-0094-4316-acac-5cc6424f12d0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.228888 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.228974 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.229232 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zxvc\" (UniqueName: \"kubernetes.io/projected/f256fcd3-0094-4316-acac-5cc6424f12d0-kube-api-access-2zxvc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.229278 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.229302 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.229620 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.234199 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.234656 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f256fcd3-0094-4316-acac-5cc6424f12d0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.235645 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.236425 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f256fcd3-0094-4316-acac-5cc6424f12d0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.251853 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f256fcd3-0094-4316-acac-5cc6424f12d0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.253205 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f256fcd3-0094-4316-acac-5cc6424f12d0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.260516 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zxvc\" (UniqueName: \"kubernetes.io/projected/f256fcd3-0094-4316-acac-5cc6424f12d0-kube-api-access-2zxvc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.277476 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f256fcd3-0094-4316-acac-5cc6424f12d0\") " pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.336711 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.708587 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.709447 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.713938 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hgdzb"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.713969 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.714151 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.720443 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.882107 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.882274 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-kolla-config\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.882343 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.882364 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p828p\" (UniqueName: \"kubernetes.io/projected/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-kube-api-access-p828p\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.882378 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-config-data\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.984099 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.984160 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-kolla-config\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.984295 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0"
Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.984331 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p828p\" (UniqueName: \"kubernetes.io/projected/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-kube-api-access-p828p\") pod \"memcached-0\" (UID: 
\"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.984360 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-config-data\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.985571 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-kolla-config\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.985706 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-config-data\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.994517 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:44 crc kubenswrapper[4937]: I0123 06:50:44.994702 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:45 crc kubenswrapper[4937]: I0123 06:50:45.010730 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p828p\" (UniqueName: 
\"kubernetes.io/projected/4665ea5d-7191-40c4-bd96-5c1b48cf97a2-kube-api-access-p828p\") pod \"memcached-0\" (UID: \"4665ea5d-7191-40c4-bd96-5c1b48cf97a2\") " pod="openstack/memcached-0" Jan 23 06:50:45 crc kubenswrapper[4937]: I0123 06:50:45.049361 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 06:50:45 crc kubenswrapper[4937]: I0123 06:50:45.106939 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 06:50:45 crc kubenswrapper[4937]: I0123 06:50:45.181624 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f256fcd3-0094-4316-acac-5cc6424f12d0","Type":"ContainerStarted","Data":"c8fab01366f06a79ad897847f5a0097478e3cc91fd2de950817f84730ba17e9a"} Jan 23 06:50:45 crc kubenswrapper[4937]: I0123 06:50:45.829970 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 06:50:45 crc kubenswrapper[4937]: W0123 06:50:45.842632 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4665ea5d_7191_40c4_bd96_5c1b48cf97a2.slice/crio-907d9b28ce36e8b40c8fa8b5f8b65e4d3bdbe7ad737d8bf18d1b46426997e969 WatchSource:0}: Error finding container 907d9b28ce36e8b40c8fa8b5f8b65e4d3bdbe7ad737d8bf18d1b46426997e969: Status 404 returned error can't find the container with id 907d9b28ce36e8b40c8fa8b5f8b65e4d3bdbe7ad737d8bf18d1b46426997e969 Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.218549 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4665ea5d-7191-40c4-bd96-5c1b48cf97a2","Type":"ContainerStarted","Data":"907d9b28ce36e8b40c8fa8b5f8b65e4d3bdbe7ad737d8bf18d1b46426997e969"} Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.220471 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:50:46 crc 
kubenswrapper[4937]: I0123 06:50:46.221483 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.226924 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-c2fkk" Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.231740 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.306546 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g87q\" (UniqueName: \"kubernetes.io/projected/75b79f91-7f35-4e37-9fd8-2ada0ad723df-kube-api-access-6g87q\") pod \"kube-state-metrics-0\" (UID: \"75b79f91-7f35-4e37-9fd8-2ada0ad723df\") " pod="openstack/kube-state-metrics-0" Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.408915 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g87q\" (UniqueName: \"kubernetes.io/projected/75b79f91-7f35-4e37-9fd8-2ada0ad723df-kube-api-access-6g87q\") pod \"kube-state-metrics-0\" (UID: \"75b79f91-7f35-4e37-9fd8-2ada0ad723df\") " pod="openstack/kube-state-metrics-0" Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.449842 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g87q\" (UniqueName: \"kubernetes.io/projected/75b79f91-7f35-4e37-9fd8-2ada0ad723df-kube-api-access-6g87q\") pod \"kube-state-metrics-0\" (UID: \"75b79f91-7f35-4e37-9fd8-2ada0ad723df\") " pod="openstack/kube-state-metrics-0" Jan 23 06:50:46 crc kubenswrapper[4937]: I0123 06:50:46.598797 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.342629 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:50:47 crc kubenswrapper[4937]: W0123 06:50:47.358967 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b79f91_7f35_4e37_9fd8_2ada0ad723df.slice/crio-a830b1ba474c5f44c7b103ea82448fb0667d9c9dcbe6fa78629342bea96704f0 WatchSource:0}: Error finding container a830b1ba474c5f44c7b103ea82448fb0667d9c9dcbe6fa78629342bea96704f0: Status 404 returned error can't find the container with id a830b1ba474c5f44c7b103ea82448fb0667d9c9dcbe6fa78629342bea96704f0 Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.551134 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.553911 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.557753 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.557785 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.557872 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.557892 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.557968 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.558131 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zrrzk" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.558268 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.565638 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.570741 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.719363 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrhg\" (UniqueName: 
\"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-kube-api-access-9vrhg\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.719787 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720210 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720243 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720279 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720300 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720324 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720345 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32a4f804-e737-4bf8-b092-b20127604273-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720375 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-config\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.720421 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824521 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824606 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824645 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824675 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824708 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824743 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32a4f804-e737-4bf8-b092-b20127604273-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824773 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-config\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824820 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824861 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrhg\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-kube-api-access-9vrhg\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.824903 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.826386 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.826892 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.827746 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.833543 4937 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.833799 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad88d819a9190e54c498f5e1a4ce0a9fbf70213240e060c76a145dc64e923a6c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.842724 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32a4f804-e737-4bf8-b092-b20127604273-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.845835 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-config\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.846510 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.846992 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.856558 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:47 crc kubenswrapper[4937]: I0123 06:50:47.866356 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrhg\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-kube-api-access-9vrhg\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:48 crc kubenswrapper[4937]: I0123 06:50:48.062565 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:48 crc kubenswrapper[4937]: I0123 06:50:48.191837 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:50:48 crc kubenswrapper[4937]: I0123 06:50:48.345753 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75b79f91-7f35-4e37-9fd8-2ada0ad723df","Type":"ContainerStarted","Data":"a830b1ba474c5f44c7b103ea82448fb0667d9c9dcbe6fa78629342bea96704f0"} Jan 23 06:50:48 crc kubenswrapper[4937]: I0123 06:50:48.567579 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:50:48 crc kubenswrapper[4937]: W0123 06:50:48.601113 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32a4f804_e737_4bf8_b092_b20127604273.slice/crio-787975cede018e1aec12442889b9218c8dd106e75aeb16de18eecd2421b651c0 WatchSource:0}: Error finding container 787975cede018e1aec12442889b9218c8dd106e75aeb16de18eecd2421b651c0: Status 404 returned error can't find the container with id 787975cede018e1aec12442889b9218c8dd106e75aeb16de18eecd2421b651c0 Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.358316 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerStarted","Data":"787975cede018e1aec12442889b9218c8dd106e75aeb16de18eecd2421b651c0"} Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.835847 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fb8bs"] Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.836861 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.839032 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.840662 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.840840 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-tg9bt" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.849573 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fb8bs"] Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.859479 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-94t5t"] Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.861469 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908398 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-log-ovn\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908436 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-etc-ovs\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908470 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-lib\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908509 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8nzd\" (UniqueName: \"kubernetes.io/projected/f7c0166f-0553-4d86-bf1f-19bdcfaea146-kube-api-access-z8nzd\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908563 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-run\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " 
pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908611 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7c0166f-0553-4d86-bf1f-19bdcfaea146-scripts\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908671 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-log\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908704 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-run-ovn\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908732 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-run\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.908796 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-scripts\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: 
I0123 06:50:49.909042 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7c0166f-0553-4d86-bf1f-19bdcfaea146-ovn-controller-tls-certs\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.909070 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0166f-0553-4d86-bf1f-19bdcfaea146-combined-ca-bundle\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.909103 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hjk\" (UniqueName: \"kubernetes.io/projected/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-kube-api-access-58hjk\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:49 crc kubenswrapper[4937]: I0123 06:50:49.922848 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-94t5t"] Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010217 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-log\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010279 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-run-ovn\") pod \"ovn-controller-fb8bs\" (UID: 
\"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010306 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-run\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010331 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-scripts\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010360 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7c0166f-0553-4d86-bf1f-19bdcfaea146-ovn-controller-tls-certs\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010377 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0166f-0553-4d86-bf1f-19bdcfaea146-combined-ca-bundle\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010397 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58hjk\" (UniqueName: \"kubernetes.io/projected/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-kube-api-access-58hjk\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc 
kubenswrapper[4937]: I0123 06:50:50.010425 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-log-ovn\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010441 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-etc-ovs\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010458 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-lib\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010477 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8nzd\" (UniqueName: \"kubernetes.io/projected/f7c0166f-0553-4d86-bf1f-19bdcfaea146-kube-api-access-z8nzd\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010523 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-run\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.010756 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/f7c0166f-0553-4d86-bf1f-19bdcfaea146-scripts\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.011480 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-log\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.011713 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-etc-ovs\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.012127 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-log-ovn\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.012453 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-lib\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.012650 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-var-run\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" 
Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.012723 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-run\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.012816 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7c0166f-0553-4d86-bf1f-19bdcfaea146-var-run-ovn\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.014207 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f7c0166f-0553-4d86-bf1f-19bdcfaea146-scripts\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.015198 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-scripts\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.020804 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7c0166f-0553-4d86-bf1f-19bdcfaea146-ovn-controller-tls-certs\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.020953 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f7c0166f-0553-4d86-bf1f-19bdcfaea146-combined-ca-bundle\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.071413 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58hjk\" (UniqueName: \"kubernetes.io/projected/0a871e3b-e711-4a88-9a1a-e9948d1ba9b9-kube-api-access-58hjk\") pod \"ovn-controller-ovs-94t5t\" (UID: \"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9\") " pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.072521 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8nzd\" (UniqueName: \"kubernetes.io/projected/f7c0166f-0553-4d86-bf1f-19bdcfaea146-kube-api-access-z8nzd\") pod \"ovn-controller-fb8bs\" (UID: \"f7c0166f-0553-4d86-bf1f-19bdcfaea146\") " pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.189471 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fb8bs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.189618 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.738925 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.740467 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.748052 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.748399 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.750266 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.750327 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.751078 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qfn88" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.786201 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.941948 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942045 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4fa3b925-02fe-4fbf-a441-98aaf94ed191-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942098 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa3b925-02fe-4fbf-a441-98aaf94ed191-config\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942123 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942175 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fa3b925-02fe-4fbf-a441-98aaf94ed191-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942196 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942244 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgz4\" (UniqueName: \"kubernetes.io/projected/4fa3b925-02fe-4fbf-a441-98aaf94ed191-kube-api-access-nqgz4\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:50 crc kubenswrapper[4937]: I0123 06:50:50.942265 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.036271 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fb8bs"] Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044250 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044337 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4fa3b925-02fe-4fbf-a441-98aaf94ed191-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044372 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa3b925-02fe-4fbf-a441-98aaf94ed191-config\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044400 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044446 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/4fa3b925-02fe-4fbf-a441-98aaf94ed191-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044476 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044522 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgz4\" (UniqueName: \"kubernetes.io/projected/4fa3b925-02fe-4fbf-a441-98aaf94ed191-kube-api-access-nqgz4\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.044548 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.046736 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.046738 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4fa3b925-02fe-4fbf-a441-98aaf94ed191-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " 
pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.048047 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa3b925-02fe-4fbf-a441-98aaf94ed191-config\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.049203 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4fa3b925-02fe-4fbf-a441-98aaf94ed191-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.067559 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.067757 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.073617 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fa3b925-02fe-4fbf-a441-98aaf94ed191-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.081668 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgz4\" (UniqueName: 
\"kubernetes.io/projected/4fa3b925-02fe-4fbf-a441-98aaf94ed191-kube-api-access-nqgz4\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.088009 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4fa3b925-02fe-4fbf-a441-98aaf94ed191\") " pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.103952 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 06:50:51 crc kubenswrapper[4937]: I0123 06:50:51.372843 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-94t5t"] Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.607171 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-x9lg6"] Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.608437 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.612368 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.626141 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x9lg6"] Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.742496 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p785\" (UniqueName: \"kubernetes.io/projected/3407cde4-142f-499c-95e0-22eb2c91ea92-kube-api-access-6p785\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.742619 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3407cde4-142f-499c-95e0-22eb2c91ea92-ovn-rundir\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.742789 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3407cde4-142f-499c-95e0-22eb2c91ea92-combined-ca-bundle\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.742827 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3407cde4-142f-499c-95e0-22eb2c91ea92-config\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " 
pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.742886 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3407cde4-142f-499c-95e0-22eb2c91ea92-ovs-rundir\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.742905 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3407cde4-142f-499c-95e0-22eb2c91ea92-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.853822 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3407cde4-142f-499c-95e0-22eb2c91ea92-combined-ca-bundle\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.853928 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3407cde4-142f-499c-95e0-22eb2c91ea92-config\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.854122 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3407cde4-142f-499c-95e0-22eb2c91ea92-ovs-rundir\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " 
pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.854407 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3407cde4-142f-499c-95e0-22eb2c91ea92-ovs-rundir\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.855003 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3407cde4-142f-499c-95e0-22eb2c91ea92-config\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.855070 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3407cde4-142f-499c-95e0-22eb2c91ea92-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.855304 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p785\" (UniqueName: \"kubernetes.io/projected/3407cde4-142f-499c-95e0-22eb2c91ea92-kube-api-access-6p785\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.855382 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3407cde4-142f-499c-95e0-22eb2c91ea92-ovn-rundir\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc 
kubenswrapper[4937]: I0123 06:50:52.855800 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3407cde4-142f-499c-95e0-22eb2c91ea92-ovn-rundir\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.861425 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3407cde4-142f-499c-95e0-22eb2c91ea92-combined-ca-bundle\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.864740 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3407cde4-142f-499c-95e0-22eb2c91ea92-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.872643 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p785\" (UniqueName: \"kubernetes.io/projected/3407cde4-142f-499c-95e0-22eb2c91ea92-kube-api-access-6p785\") pod \"ovn-controller-metrics-x9lg6\" (UID: \"3407cde4-142f-499c-95e0-22eb2c91ea92\") " pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:52 crc kubenswrapper[4937]: I0123 06:50:52.995199 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-x9lg6" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.803039 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.807348 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.825315 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bs8lx" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.826187 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.826348 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.835663 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.853192 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939409 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939461 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939495 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939534 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5e690b-35ba-4305-b976-ede5fab8e117-config\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939548 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939622 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d5e690b-35ba-4305-b976-ede5fab8e117-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939646 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8ht\" (UniqueName: \"kubernetes.io/projected/1d5e690b-35ba-4305-b976-ede5fab8e117-kube-api-access-9c8ht\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:53 crc kubenswrapper[4937]: I0123 06:50:53.939668 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d5e690b-35ba-4305-b976-ede5fab8e117-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 
06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041045 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5e690b-35ba-4305-b976-ede5fab8e117-config\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041118 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041255 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d5e690b-35ba-4305-b976-ede5fab8e117-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041293 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8ht\" (UniqueName: \"kubernetes.io/projected/1d5e690b-35ba-4305-b976-ede5fab8e117-kube-api-access-9c8ht\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041351 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d5e690b-35ba-4305-b976-ede5fab8e117-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041380 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041431 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041495 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041880 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1d5e690b-35ba-4305-b976-ede5fab8e117-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.041962 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d5e690b-35ba-4305-b976-ede5fab8e117-config\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.042067 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 23 
06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.043630 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d5e690b-35ba-4305-b976-ede5fab8e117-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.047754 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.052899 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.063619 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8ht\" (UniqueName: \"kubernetes.io/projected/1d5e690b-35ba-4305-b976-ede5fab8e117-kube-api-access-9c8ht\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.066507 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5e690b-35ba-4305-b976-ede5fab8e117-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.067869 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1d5e690b-35ba-4305-b976-ede5fab8e117\") " pod="openstack/ovsdbserver-nb-0" Jan 23 06:50:54 crc kubenswrapper[4937]: I0123 06:50:54.197952 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 06:51:07 crc kubenswrapper[4937]: I0123 06:51:07.724427 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:51:07 crc kubenswrapper[4937]: I0123 06:51:07.725133 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:51:07 crc kubenswrapper[4937]: I0123 06:51:07.725189 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:51:07 crc kubenswrapper[4937]: I0123 06:51:07.725960 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43224d24fa6475b6c1d236391f3764ff43f9af8ed79931ba5cc6fab271c5f9c7"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:51:07 crc kubenswrapper[4937]: I0123 06:51:07.726031 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" 
containerName="machine-config-daemon" containerID="cri-o://43224d24fa6475b6c1d236391f3764ff43f9af8ed79931ba5cc6fab271c5f9c7" gracePeriod=600 Jan 23 06:51:11 crc kubenswrapper[4937]: I0123 06:51:11.717922 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="43224d24fa6475b6c1d236391f3764ff43f9af8ed79931ba5cc6fab271c5f9c7" exitCode=0 Jan 23 06:51:11 crc kubenswrapper[4937]: I0123 06:51:11.718069 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"43224d24fa6475b6c1d236391f3764ff43f9af8ed79931ba5cc6fab271c5f9c7"} Jan 23 06:51:11 crc kubenswrapper[4937]: I0123 06:51:11.718345 4937 scope.go:117] "RemoveContainer" containerID="998b72db21c32d780c7acaf4315ccfdf0bdabc15b6ed9c145723fd20240726ab" Jan 23 06:51:23 crc kubenswrapper[4937]: W0123 06:51:23.547239 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a871e3b_e711_4a88_9a1a_e9948d1ba9b9.slice/crio-29dbd2e79e2f7ffb850bee3782a04fad43d3d40135fc744cff5a919eea08dd84 WatchSource:0}: Error finding container 29dbd2e79e2f7ffb850bee3782a04fad43d3d40135fc744cff5a919eea08dd84: Status 404 returned error can't find the container with id 29dbd2e79e2f7ffb850bee3782a04fad43d3d40135fc744cff5a919eea08dd84 Jan 23 06:51:23 crc kubenswrapper[4937]: I0123 06:51:23.826434 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fb8bs" event={"ID":"f7c0166f-0553-4d86-bf1f-19bdcfaea146","Type":"ContainerStarted","Data":"05e80dc4e1f4606e082d6869130096f664d939394388dc1c2101d00ffd49c12e"} Jan 23 06:51:23 crc kubenswrapper[4937]: I0123 06:51:23.827991 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-94t5t" 
event={"ID":"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9","Type":"ContainerStarted","Data":"29dbd2e79e2f7ffb850bee3782a04fad43d3d40135fc744cff5a919eea08dd84"} Jan 23 06:51:43 crc kubenswrapper[4937]: E0123 06:51:43.838659 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:43 crc kubenswrapper[4937]: E0123 06:51:43.839502 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:43 crc kubenswrapper[4937]: E0123 06:51:43.839875 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6wvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-55fc5545f-c4rvb_openstack(a99c63c6-94a6-4a26-ab33-e492d6644b25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:43 crc kubenswrapper[4937]: E0123 06:51:43.842045 4937 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" podUID="a99c63c6-94a6-4a26-ab33-e492d6644b25" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.604962 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.605423 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.605640 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vkpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:45 crc 
kubenswrapper[4937]: E0123 06:51:45.606874 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.700123 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.700181 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.700307 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d2xc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7ff6ccbbbc-s2msr_openstack(605c07a2-d595-45ed-924f-6ff0d6cd1eb0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:45 crc kubenswrapper[4937]: E0123 06:51:45.701675 4937 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" podUID="605c07a2-d595-45ed-924f-6ff0d6cd1eb0" Jan 23 06:51:46 crc kubenswrapper[4937]: E0123 06:51:46.030750 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e" Jan 23 06:51:47 crc kubenswrapper[4937]: E0123 06:51:47.985697 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 23 06:51:47 crc kubenswrapper[4937]: E0123 06:51:47.986178 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 23 06:51:47 crc kubenswrapper[4937]: E0123 06:51:47.986326 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42vkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(cdd3a96c-6f65-4f39-b435-78f7ceed08b5): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:47 crc kubenswrapper[4937]: E0123 06:51:47.987572 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="cdd3a96c-6f65-4f39-b435-78f7ceed08b5" Jan 23 06:51:48 crc kubenswrapper[4937]: E0123 06:51:48.042394 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="cdd3a96c-6f65-4f39-b435-78f7ceed08b5" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.613267 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.614012 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.614231 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw2qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-68cdf6699c-257b4_openstack(de9f1ace-5aaf-42ed-b6ad-906825c2fb34): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.615443 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-68cdf6699c-257b4" podUID="de9f1ace-5aaf-42ed-b6ad-906825c2fb34" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.672089 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.676099 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.744353 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6wvr\" (UniqueName: \"kubernetes.io/projected/a99c63c6-94a6-4a26-ab33-e492d6644b25-kube-api-access-g6wvr\") pod \"a99c63c6-94a6-4a26-ab33-e492d6644b25\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.744439 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-config\") pod \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.744468 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-dns-svc\") pod \"a99c63c6-94a6-4a26-ab33-e492d6644b25\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.744571 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-dns-svc\") pod \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.744656 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-config\") pod \"a99c63c6-94a6-4a26-ab33-e492d6644b25\" (UID: \"a99c63c6-94a6-4a26-ab33-e492d6644b25\") " Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.744674 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d2xc\" (UniqueName: \"kubernetes.io/projected/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-kube-api-access-2d2xc\") pod \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\" (UID: \"605c07a2-d595-45ed-924f-6ff0d6cd1eb0\") " Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.745057 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-config" (OuterVolumeSpecName: "config") pod "605c07a2-d595-45ed-924f-6ff0d6cd1eb0" (UID: "605c07a2-d595-45ed-924f-6ff0d6cd1eb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.745080 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a99c63c6-94a6-4a26-ab33-e492d6644b25" (UID: "a99c63c6-94a6-4a26-ab33-e492d6644b25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.745200 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "605c07a2-d595-45ed-924f-6ff0d6cd1eb0" (UID: "605c07a2-d595-45ed-924f-6ff0d6cd1eb0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.745461 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-config" (OuterVolumeSpecName: "config") pod "a99c63c6-94a6-4a26-ab33-e492d6644b25" (UID: "a99c63c6-94a6-4a26-ab33-e492d6644b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.750863 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a99c63c6-94a6-4a26-ab33-e492d6644b25-kube-api-access-g6wvr" (OuterVolumeSpecName: "kube-api-access-g6wvr") pod "a99c63c6-94a6-4a26-ab33-e492d6644b25" (UID: "a99c63c6-94a6-4a26-ab33-e492d6644b25"). InnerVolumeSpecName "kube-api-access-g6wvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.750938 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-kube-api-access-2d2xc" (OuterVolumeSpecName: "kube-api-access-2d2xc") pod "605c07a2-d595-45ed-924f-6ff0d6cd1eb0" (UID: "605c07a2-d595-45ed-924f-6ff0d6cd1eb0"). InnerVolumeSpecName "kube-api-access-2d2xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.847494 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.847542 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.847557 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99c63c6-94a6-4a26-ab33-e492d6644b25-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.847571 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d2xc\" (UniqueName: \"kubernetes.io/projected/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-kube-api-access-2d2xc\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.847586 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6wvr\" (UniqueName: \"kubernetes.io/projected/a99c63c6-94a6-4a26-ab33-e492d6644b25-kube-api-access-g6wvr\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:49 crc kubenswrapper[4937]: I0123 06:51:49.847617 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605c07a2-d595-45ed-924f-6ff0d6cd1eb0-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.928050 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.928145 4937 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.928326 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zxvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonR
oot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(f256fcd3-0094-4316-acac-5cc6424f12d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:49 crc kubenswrapper[4937]: E0123 06:51:49.929554 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="f256fcd3-0094-4316-acac-5cc6424f12d0" Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.054209 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.054209 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6ccbbbc-s2msr" event={"ID":"605c07a2-d595-45ed-924f-6ff0d6cd1eb0","Type":"ContainerDied","Data":"cb73c5911ee9b7f71430eeb7f22ad5b457bd7815c1b20508c95fdff314de88e9"} Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.055307 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" event={"ID":"a99c63c6-94a6-4a26-ab33-e492d6644b25","Type":"ContainerDied","Data":"717567e9398832d1967e061a4928c871ad4ca4289d87f6df47175e50c49a6594"} Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.055361 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fc5545f-c4rvb" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.056472 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="f256fcd3-0094-4316-acac-5cc6424f12d0" Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.206726 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6ccbbbc-s2msr"] Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.213320 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff6ccbbbc-s2msr"] Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.228366 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fc5545f-c4rvb"] Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.252826 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fc5545f-c4rvb"] Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.538947 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605c07a2-d595-45ed-924f-6ff0d6cd1eb0" path="/var/lib/kubelet/pods/605c07a2-d595-45ed-924f-6ff0d6cd1eb0/volumes" Jan 23 06:51:50 crc kubenswrapper[4937]: I0123 06:51:50.539871 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a99c63c6-94a6-4a26-ab33-e492d6644b25" path="/var/lib/kubelet/pods/a99c63c6-94a6-4a26-ab33-e492d6644b25/volumes" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.712751 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.712824 4937 kuberuntime_image.go:55] "Failed 
to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.712961 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fg4dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(c22daa68-7c34-4180-adcc-d939bfa5a607): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:50 crc 
kubenswrapper[4937]: E0123 06:51:50.714940 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.932017 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-memcached:watcher_latest" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.932087 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-memcached:watcher_latest" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.932254 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.44:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5dhd9hb7hf6hcdh68chc5h689h659hf8h697h69h568h99h585h7dh597h9dh58fh5cbhf4h54bh64ch94h6chc4h65bhdch5bbhd5h94h85q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p828p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(4665ea5d-7191-40c4-bd96-5c1b48cf97a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.933439 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="4665ea5d-7191-40c4-bd96-5c1b48cf97a2" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.983181 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 
06:51:50.983362 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vrhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(32a4f804-e737-4bf8-b092-b20127604273): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 23 06:51:50 crc kubenswrapper[4937]: E0123 06:51:50.984551 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="32a4f804-e737-4bf8-b092-b20127604273" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.063545 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="4665ea5d-7191-40c4-bd96-5c1b48cf97a2" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.063648 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.063698 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="32a4f804-e737-4bf8-b092-b20127604273" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.345338 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.345416 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.345557 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldwzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,
ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-db4cc579f-z4cj4_openstack(02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:51 crc kubenswrapper[4937]: E0123 06:51:51.347008 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" podUID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" Jan 23 06:51:52 crc kubenswrapper[4937]: E0123 06:51:52.069251 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" podUID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" Jan 23 06:51:52 crc kubenswrapper[4937]: E0123 06:51:52.948285 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 23 06:51:52 crc kubenswrapper[4937]: E0123 06:51:52.948338 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Jan 23 06:51:52 crc kubenswrapper[4937]: E0123 06:51:52.948454 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvkj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:52 crc 
kubenswrapper[4937]: E0123 06:51:52.949676 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" Jan 23 06:51:53 crc kubenswrapper[4937]: E0123 06:51:53.077328 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" Jan 23 06:51:53 crc kubenswrapper[4937]: E0123 06:51:53.107055 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:53 crc kubenswrapper[4937]: E0123 06:51:53.107131 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 06:51:53 crc kubenswrapper[4937]: E0123 06:51:53.107272 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr2dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5fb9cb7945-t6mcj_openstack(ffd88816-be2b-4c27-a5f5-a061d44c8e63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:51:53 crc kubenswrapper[4937]: E0123 06:51:53.108428 4937 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" podUID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" Jan 23 06:51:53 crc kubenswrapper[4937]: I0123 06:51:53.780802 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:51:53 crc kubenswrapper[4937]: I0123 06:51:53.928394 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw2qk\" (UniqueName: \"kubernetes.io/projected/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-kube-api-access-gw2qk\") pod \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " Jan 23 06:51:53 crc kubenswrapper[4937]: I0123 06:51:53.928457 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-config\") pod \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\" (UID: \"de9f1ace-5aaf-42ed-b6ad-906825c2fb34\") " Jan 23 06:51:53 crc kubenswrapper[4937]: I0123 06:51:53.929568 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-config" (OuterVolumeSpecName: "config") pod "de9f1ace-5aaf-42ed-b6ad-906825c2fb34" (UID: "de9f1ace-5aaf-42ed-b6ad-906825c2fb34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:51:53 crc kubenswrapper[4937]: I0123 06:51:53.941759 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-kube-api-access-gw2qk" (OuterVolumeSpecName: "kube-api-access-gw2qk") pod "de9f1ace-5aaf-42ed-b6ad-906825c2fb34" (UID: "de9f1ace-5aaf-42ed-b6ad-906825c2fb34"). InnerVolumeSpecName "kube-api-access-gw2qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.030564 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw2qk\" (UniqueName: \"kubernetes.io/projected/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-kube-api-access-gw2qk\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.030614 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de9f1ace-5aaf-42ed-b6ad-906825c2fb34-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.087001 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"2cafd2f65b886b18799192369211303c6e133b845292356bb63a572ac676cc20"} Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.088223 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68cdf6699c-257b4" event={"ID":"de9f1ace-5aaf-42ed-b6ad-906825c2fb34","Type":"ContainerDied","Data":"c587d520ac6239473c0f6310d9e9f79e550780efb73e72ba6fbec5567400bb1c"} Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.088329 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68cdf6699c-257b4" Jan 23 06:51:54 crc kubenswrapper[4937]: E0123 06:51:54.089532 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" podUID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.165711 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-x9lg6"] Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.211570 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68cdf6699c-257b4"] Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.220559 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68cdf6699c-257b4"] Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.262788 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.537380 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9f1ace-5aaf-42ed-b6ad-906825c2fb34" path="/var/lib/kubelet/pods/de9f1ace-5aaf-42ed-b6ad-906825c2fb34/volumes" Jan 23 06:51:54 crc kubenswrapper[4937]: I0123 06:51:54.770532 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 06:51:54 crc kubenswrapper[4937]: E0123 06:51:54.775722 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 06:51:54 crc kubenswrapper[4937]: E0123 06:51:54.775777 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 06:51:54 crc kubenswrapper[4937]: E0123 06:51:54.775914 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6g87q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(75b79f91-7f35-4e37-9fd8-2ada0ad723df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 23 06:51:54 crc kubenswrapper[4937]: E0123 06:51:54.777103 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" Jan 23 06:51:54 crc kubenswrapper[4937]: W0123 06:51:54.790950 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d5e690b_35ba_4305_b976_ede5fab8e117.slice/crio-1054435138d67eb8a6cc1e4fde6c0954028fe22345ab3b4cc44908bddb1878fa WatchSource:0}: Error finding container 1054435138d67eb8a6cc1e4fde6c0954028fe22345ab3b4cc44908bddb1878fa: Status 404 returned error can't find the container with id 
1054435138d67eb8a6cc1e4fde6c0954028fe22345ab3b4cc44908bddb1878fa Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.097428 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4fa3b925-02fe-4fbf-a441-98aaf94ed191","Type":"ContainerStarted","Data":"541375c10ad6b33364e9f57269e4d484b67fcd6934070e7b27a439689d5eb761"} Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.098945 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1d5e690b-35ba-4305-b976-ede5fab8e117","Type":"ContainerStarted","Data":"1054435138d67eb8a6cc1e4fde6c0954028fe22345ab3b4cc44908bddb1878fa"} Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.101162 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fb8bs" event={"ID":"f7c0166f-0553-4d86-bf1f-19bdcfaea146","Type":"ContainerStarted","Data":"b7d683e6e285bce1afdb9c21f88e849fd0668afe6db6501188c7cabb0903c3df"} Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.101323 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fb8bs" Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.104492 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-94t5t" event={"ID":"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9","Type":"ContainerStarted","Data":"c47eaebb1382e43a6ed9b001cb61cc42eb6e28de456ef7ef4a62ab0fb074400f"} Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.105741 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x9lg6" event={"ID":"3407cde4-142f-499c-95e0-22eb2c91ea92","Type":"ContainerStarted","Data":"c92f5db51b1be1fb395b7185e5cf335ed9a7fd9f26cf63e0053679ef9e89a74d"} Jan 23 06:51:55 crc kubenswrapper[4937]: E0123 06:51:55.107755 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" Jan 23 06:51:55 crc kubenswrapper[4937]: I0123 06:51:55.122426 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fb8bs" podStartSLOduration=34.963737098 podStartE2EDuration="1m6.122402964s" podCreationTimestamp="2026-01-23 06:50:49 +0000 UTC" firstStartedPulling="2026-01-23 06:51:23.550712505 +0000 UTC m=+1083.354479158" lastFinishedPulling="2026-01-23 06:51:54.709378371 +0000 UTC m=+1114.513145024" observedRunningTime="2026-01-23 06:51:55.118086021 +0000 UTC m=+1114.921852694" watchObservedRunningTime="2026-01-23 06:51:55.122402964 +0000 UTC m=+1114.926169617" Jan 23 06:51:56 crc kubenswrapper[4937]: I0123 06:51:56.114825 4937 generic.go:334] "Generic (PLEG): container finished" podID="0a871e3b-e711-4a88-9a1a-e9948d1ba9b9" containerID="c47eaebb1382e43a6ed9b001cb61cc42eb6e28de456ef7ef4a62ab0fb074400f" exitCode=0 Jan 23 06:51:56 crc kubenswrapper[4937]: I0123 06:51:56.114935 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-94t5t" event={"ID":"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9","Type":"ContainerDied","Data":"c47eaebb1382e43a6ed9b001cb61cc42eb6e28de456ef7ef4a62ab0fb074400f"} Jan 23 06:52:11 crc kubenswrapper[4937]: I0123 06:52:11.226371 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4fa3b925-02fe-4fbf-a441-98aaf94ed191","Type":"ContainerStarted","Data":"0e3d7a7f2e5f7eb5c360e2175d6e3363fa9cc194c63d83391e5becbcb59924fe"} Jan 23 06:52:12 crc kubenswrapper[4937]: I0123 06:52:12.236889 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1d5e690b-35ba-4305-b976-ede5fab8e117","Type":"ContainerStarted","Data":"4b7bcf87f83e8322e74d0d85581a140293cd9a4149a0ce69274119c3c781e453"} Jan 23 06:52:12 crc kubenswrapper[4937]: I0123 
06:52:12.239997 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-94t5t" event={"ID":"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9","Type":"ContainerStarted","Data":"4cdd7ba4953696a9e4b3ffbf1dab26953117ecd87b432916cb73e501a6053c75"} Jan 23 06:52:12 crc kubenswrapper[4937]: I0123 06:52:12.240044 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-94t5t" event={"ID":"0a871e3b-e711-4a88-9a1a-e9948d1ba9b9","Type":"ContainerStarted","Data":"4ea969b9c58a5b36e94c220db9ce6fc0e0b5a430b5ef29d60d17be7070d83c65"} Jan 23 06:52:13 crc kubenswrapper[4937]: I0123 06:52:13.248807 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:52:13 crc kubenswrapper[4937]: I0123 06:52:13.276005 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-94t5t" podStartSLOduration=53.351969759 podStartE2EDuration="1m24.275987896s" podCreationTimestamp="2026-01-23 06:50:49 +0000 UTC" firstStartedPulling="2026-01-23 06:51:23.550806928 +0000 UTC m=+1083.354573581" lastFinishedPulling="2026-01-23 06:51:54.474825055 +0000 UTC m=+1114.278591718" observedRunningTime="2026-01-23 06:52:13.267137923 +0000 UTC m=+1133.070904576" watchObservedRunningTime="2026-01-23 06:52:13.275987896 +0000 UTC m=+1133.079754549" Jan 23 06:52:14 crc kubenswrapper[4937]: I0123 06:52:14.257892 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:52:25 crc kubenswrapper[4937]: I0123 06:52:25.253239 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fb8bs" podUID="f7c0166f-0553-4d86-bf1f-19bdcfaea146" containerName="ovn-controller" probeResult="failure" output=< Jan 23 06:52:25 crc kubenswrapper[4937]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 06:52:25 crc kubenswrapper[4937]: > Jan 23 
06:52:30 crc kubenswrapper[4937]: I0123 06:52:30.250287 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fb8bs" podUID="f7c0166f-0553-4d86-bf1f-19bdcfaea146" containerName="ovn-controller" probeResult="failure" output=< Jan 23 06:52:30 crc kubenswrapper[4937]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 06:52:30 crc kubenswrapper[4937]: > Jan 23 06:52:35 crc kubenswrapper[4937]: I0123 06:52:35.235516 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fb8bs" podUID="f7c0166f-0553-4d86-bf1f-19bdcfaea146" containerName="ovn-controller" probeResult="failure" output=< Jan 23 06:52:35 crc kubenswrapper[4937]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 06:52:35 crc kubenswrapper[4937]: > Jan 23 06:52:40 crc kubenswrapper[4937]: I0123 06:52:40.225653 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fb8bs" podUID="f7c0166f-0553-4d86-bf1f-19bdcfaea146" containerName="ovn-controller" probeResult="failure" output=< Jan 23 06:52:40 crc kubenswrapper[4937]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 06:52:40 crc kubenswrapper[4937]: > Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.234541 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.235791 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-94t5t" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.241018 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fb8bs" podUID="f7c0166f-0553-4d86-bf1f-19bdcfaea146" containerName="ovn-controller" probeResult="failure" output=< Jan 23 06:52:45 crc kubenswrapper[4937]: ERROR - ovn-controller 
connection status is 'not connected', expecting 'connected' status Jan 23 06:52:45 crc kubenswrapper[4937]: > Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.682363 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db4cc579f-z4cj4"] Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.708894 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff579b49f-7cxx7"] Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.710520 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.712430 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.735731 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff579b49f-7cxx7"] Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.892403 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb9cb7945-t6mcj"] Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.907717 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsts\" (UniqueName: \"kubernetes.io/projected/80ddcc64-b4cf-4721-9091-bad2da0a933a-kube-api-access-wdsts\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.907809 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 
06:52:45.907856 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-config\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.907909 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-dns-svc\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.920611 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"] Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.921886 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.924413 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 06:52:45 crc kubenswrapper[4937]: I0123 06:52:45.933982 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"] Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.008777 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-dns-svc\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.008934 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009038 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009144 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsts\" (UniqueName: \"kubernetes.io/projected/80ddcc64-b4cf-4721-9091-bad2da0a933a-kube-api-access-wdsts\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " 
pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009214 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7bs\" (UniqueName: \"kubernetes.io/projected/3691b989-e659-420b-adad-bbf7ac2406cc-kube-api-access-cq7bs\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009243 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009291 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-config\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009360 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-dns-svc\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.009406 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-config\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 
23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.010672 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-ovsdbserver-sb\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.010683 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-config\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.011983 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-dns-svc\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.031525 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsts\" (UniqueName: \"kubernetes.io/projected/80ddcc64-b4cf-4721-9091-bad2da0a933a-kube-api-access-wdsts\") pod \"dnsmasq-dns-6ff579b49f-7cxx7\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.037067 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.110422 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-dns-svc\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.110507 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.110552 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.110627 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq7bs\" (UniqueName: \"kubernetes.io/projected/3691b989-e659-420b-adad-bbf7ac2406cc-kube-api-access-cq7bs\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.110683 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-config\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" 
Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.111349 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-dns-svc\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.111849 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-config\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.112081 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.112473 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.137871 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq7bs\" (UniqueName: \"kubernetes.io/projected/3691b989-e659-420b-adad-bbf7ac2406cc-kube-api-access-cq7bs\") pod \"dnsmasq-dns-7bcd8bcc77-cjvkj\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") " pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:46 crc kubenswrapper[4937]: I0123 06:52:46.244800 4937 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:49 crc kubenswrapper[4937]: I0123 06:52:49.141893 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff579b49f-7cxx7"] Jan 23 06:52:49 crc kubenswrapper[4937]: I0123 06:52:49.192009 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"] Jan 23 06:52:49 crc kubenswrapper[4937]: W0123 06:52:49.523666 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80ddcc64_b4cf_4721_9091_bad2da0a933a.slice/crio-c79870373a35b0a1f58f57550ca507ea57982c9584384e74a72f0d714d7535c2 WatchSource:0}: Error finding container c79870373a35b0a1f58f57550ca507ea57982c9584384e74a72f0d714d7535c2: Status 404 returned error can't find the container with id c79870373a35b0a1f58f57550ca507ea57982c9584384e74a72f0d714d7535c2 Jan 23 06:52:49 crc kubenswrapper[4937]: I0123 06:52:49.590031 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" event={"ID":"80ddcc64-b4cf-4721-9091-bad2da0a933a","Type":"ContainerStarted","Data":"c79870373a35b0a1f58f57550ca507ea57982c9584384e74a72f0d714d7535c2"} Jan 23 06:52:49 crc kubenswrapper[4937]: I0123 06:52:49.591220 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" event={"ID":"3691b989-e659-420b-adad-bbf7ac2406cc","Type":"ContainerStarted","Data":"bd530326a3bd08f4ed721b8a2c7472dee472462cd5898a3dd23addbf1e8fd643"} Jan 23 06:52:49 crc kubenswrapper[4937]: E0123 06:52:49.683807 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 06:52:49 crc kubenswrapper[4937]: E0123 06:52:49.683874 4937 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 06:52:49 crc kubenswrapper[4937]: E0123 06:52:49.684080 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6g87q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(75b79f91-7f35-4e37-9fd8-2ada0ad723df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 23 06:52:49 crc kubenswrapper[4937]: E0123 06:52:49.685246 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.271186 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fb8bs" podUID="f7c0166f-0553-4d86-bf1f-19bdcfaea146" containerName="ovn-controller" probeResult="failure" output=< Jan 23 06:52:50 crc kubenswrapper[4937]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 06:52:50 crc kubenswrapper[4937]: > Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.598342 4937 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cdd3a96c-6f65-4f39-b435-78f7ceed08b5","Type":"ContainerStarted","Data":"068a22dbd83f0dd7241fd7946ef946e1c9ff7e606833067b7700379609d51d2d"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.601487 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f256fcd3-0094-4316-acac-5cc6424f12d0","Type":"ContainerStarted","Data":"df33d906da0f28f963baba73b229db70260a78efff0eb97f0285329caccad047"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.606092 4937 generic.go:334] "Generic (PLEG): container finished" podID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" containerID="29b485654cc32fcff657a923b08e7da17c3c3c389d1f3c6b32c5fd1f19737978" exitCode=0 Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.606166 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" event={"ID":"ffd88816-be2b-4c27-a5f5-a061d44c8e63","Type":"ContainerDied","Data":"29b485654cc32fcff657a923b08e7da17c3c3c389d1f3c6b32c5fd1f19737978"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.608806 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4665ea5d-7191-40c4-bd96-5c1b48cf97a2","Type":"ContainerStarted","Data":"ccf6561a281b00d1b1d896089cf60f49b418d322fcdd5fce8004d4a2ac5046dd"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.609609 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.613163 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1d5e690b-35ba-4305-b976-ede5fab8e117","Type":"ContainerStarted","Data":"fb2b567487b11ccb56519ca13fe578858d1388c946ed12d5caefd68ba8742e88"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.614585 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="3691b989-e659-420b-adad-bbf7ac2406cc" containerID="e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf" exitCode=0 Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.614717 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" event={"ID":"3691b989-e659-420b-adad-bbf7ac2406cc","Type":"ContainerDied","Data":"e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.619752 4937 generic.go:334] "Generic (PLEG): container finished" podID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" containerID="e745b981ea51f1d87e28d7b5a0dcae61c5b25d7f6c27260ca7a7820ed413f03e" exitCode=0 Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.619889 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" event={"ID":"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03","Type":"ContainerDied","Data":"e745b981ea51f1d87e28d7b5a0dcae61c5b25d7f6c27260ca7a7820ed413f03e"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.627499 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-x9lg6" event={"ID":"3407cde4-142f-499c-95e0-22eb2c91ea92","Type":"ContainerStarted","Data":"f4e2eae6149d7879a558a9611ac2fd0785bede9df334c0a0ccd11f68b8ac2c97"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.632161 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4fa3b925-02fe-4fbf-a441-98aaf94ed191","Type":"ContainerStarted","Data":"98773b09049d57c59987f9eaf56e7f9837118dd6b748a355fff3d0d76f64a5a1"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.633688 4937 generic.go:334] "Generic (PLEG): container finished" podID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerID="2739596eb4e338e4e34ca74c0e171ba62655d21dbbc73c31f18251d7d3bca80f" exitCode=0 Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.633726 4937 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" event={"ID":"80ddcc64-b4cf-4721-9091-bad2da0a933a","Type":"ContainerDied","Data":"2739596eb4e338e4e34ca74c0e171ba62655d21dbbc73c31f18251d7d3bca80f"} Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.644509 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.874114072 podStartE2EDuration="2m6.644489641s" podCreationTimestamp="2026-01-23 06:50:44 +0000 UTC" firstStartedPulling="2026-01-23 06:50:45.845809485 +0000 UTC m=+1045.649576138" lastFinishedPulling="2026-01-23 06:52:47.616185054 +0000 UTC m=+1167.419951707" observedRunningTime="2026-01-23 06:52:50.644485451 +0000 UTC m=+1170.448252114" watchObservedRunningTime="2026-01-23 06:52:50.644489641 +0000 UTC m=+1170.448256294" Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.701546 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=65.049453998 podStartE2EDuration="1m58.701526263s" podCreationTimestamp="2026-01-23 06:50:52 +0000 UTC" firstStartedPulling="2026-01-23 06:51:54.793461154 +0000 UTC m=+1114.597227807" lastFinishedPulling="2026-01-23 06:52:48.445533379 +0000 UTC m=+1168.249300072" observedRunningTime="2026-01-23 06:52:50.700367922 +0000 UTC m=+1170.504134585" watchObservedRunningTime="2026-01-23 06:52:50.701526263 +0000 UTC m=+1170.505292916" Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.797569 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-x9lg6" podStartSLOduration=65.639885653 podStartE2EDuration="1m58.797546901s" podCreationTimestamp="2026-01-23 06:50:52 +0000 UTC" firstStartedPulling="2026-01-23 06:51:54.458554947 +0000 UTC m=+1114.262321600" lastFinishedPulling="2026-01-23 06:52:47.616216195 +0000 UTC m=+1167.419982848" observedRunningTime="2026-01-23 06:52:50.766430701 +0000 UTC m=+1170.570197364" 
watchObservedRunningTime="2026-01-23 06:52:50.797546901 +0000 UTC m=+1170.601313564" Jan 23 06:52:50 crc kubenswrapper[4937]: I0123 06:52:50.799448 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=67.826148199 podStartE2EDuration="2m1.79943844s" podCreationTimestamp="2026-01-23 06:50:49 +0000 UTC" firstStartedPulling="2026-01-23 06:51:54.460449797 +0000 UTC m=+1114.264216440" lastFinishedPulling="2026-01-23 06:52:48.433739988 +0000 UTC m=+1168.237506681" observedRunningTime="2026-01-23 06:52:50.787797774 +0000 UTC m=+1170.591564427" watchObservedRunningTime="2026-01-23 06:52:50.79943844 +0000 UTC m=+1170.603205093" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.104139 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.104231 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.146570 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.198875 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.237308 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.644340 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c22daa68-7c34-4180-adcc-d939bfa5a607","Type":"ContainerStarted","Data":"1061c71e90e3c339192a8927fc1c04e92e12d63bd856ba64167558bd0307450c"} Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.648795 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6","Type":"ContainerStarted","Data":"ed4d6735eacad8b5669c077204ba76ee680315c08659282ac0774ec434cbe540"} Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.651445 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" event={"ID":"3691b989-e659-420b-adad-bbf7ac2406cc","Type":"ContainerStarted","Data":"f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1"} Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.651654 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.653120 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e","Type":"ContainerStarted","Data":"93a6e67502750b306bf7e3beb72bab1c0ae18bd87867fc23d07a2735becce099"} Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.653552 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.718422 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.721891 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.743358 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" podStartSLOduration=6.74333592 podStartE2EDuration="6.74333592s" podCreationTimestamp="2026-01-23 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:52:51.730987085 +0000 UTC m=+1171.534753758" 
watchObservedRunningTime="2026-01-23 06:52:51.74333592 +0000 UTC m=+1171.547102593" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.788695 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.794404 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.940432 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-dns-svc\") pod \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.940500 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-config\") pod \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.940530 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-config\") pod \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.940646 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-dns-svc\") pod \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.940687 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwzj\" 
(UniqueName: \"kubernetes.io/projected/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-kube-api-access-ldwzj\") pod \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\" (UID: \"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03\") " Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.940723 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr2dj\" (UniqueName: \"kubernetes.io/projected/ffd88816-be2b-4c27-a5f5-a061d44c8e63-kube-api-access-jr2dj\") pod \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\" (UID: \"ffd88816-be2b-4c27-a5f5-a061d44c8e63\") " Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.974430 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd88816-be2b-4c27-a5f5-a061d44c8e63-kube-api-access-jr2dj" (OuterVolumeSpecName: "kube-api-access-jr2dj") pod "ffd88816-be2b-4c27-a5f5-a061d44c8e63" (UID: "ffd88816-be2b-4c27-a5f5-a061d44c8e63"). InnerVolumeSpecName "kube-api-access-jr2dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:52:51 crc kubenswrapper[4937]: I0123 06:52:51.986813 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-kube-api-access-ldwzj" (OuterVolumeSpecName: "kube-api-access-ldwzj") pod "02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" (UID: "02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03"). InnerVolumeSpecName "kube-api-access-ldwzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.012372 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-config" (OuterVolumeSpecName: "config") pod "ffd88816-be2b-4c27-a5f5-a061d44c8e63" (UID: "ffd88816-be2b-4c27-a5f5-a061d44c8e63"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.023085 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ffd88816-be2b-4c27-a5f5-a061d44c8e63" (UID: "ffd88816-be2b-4c27-a5f5-a061d44c8e63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.027658 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" (UID: "02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.029537 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-config" (OuterVolumeSpecName: "config") pod "02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" (UID: "02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.042687 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.042717 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldwzj\" (UniqueName: \"kubernetes.io/projected/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-kube-api-access-ldwzj\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.042726 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr2dj\" (UniqueName: \"kubernetes.io/projected/ffd88816-be2b-4c27-a5f5-a061d44c8e63-kube-api-access-jr2dj\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.042735 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.042744 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.042773 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffd88816-be2b-4c27-a5f5-a061d44c8e63-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.172346 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 06:52:52 crc kubenswrapper[4937]: E0123 06:52:52.172688 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" containerName="init" Jan 23 06:52:52 crc 
kubenswrapper[4937]: I0123 06:52:52.172705 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" containerName="init" Jan 23 06:52:52 crc kubenswrapper[4937]: E0123 06:52:52.172736 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" containerName="init" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.172742 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" containerName="init" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.172905 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" containerName="init" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.172919 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" containerName="init" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.173764 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.176077 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.176328 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.176790 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-cp9hz" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.177975 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.225466 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246646 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-scripts\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246699 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246723 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " 
pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246755 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246794 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59dm\" (UniqueName: \"kubernetes.io/projected/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-kube-api-access-k59dm\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246873 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-config\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.246888 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.254065 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fb8bs-config-gpdpw"] Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.255148 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.264713 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.265868 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fb8bs-config-gpdpw"] Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348062 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run-ovn\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348124 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-kube-api-access-wpncj\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348156 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-config\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348175 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 
06:52:52.348202 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348238 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-scripts\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348255 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348271 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-log-ovn\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348294 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-additional-scripts\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348314 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348343 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348374 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59dm\" (UniqueName: \"kubernetes.io/projected/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-kube-api-access-k59dm\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.348406 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-scripts\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.349207 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-config\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.349452 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.350016 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-scripts\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.354454 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.365406 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.366030 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.382085 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59dm\" (UniqueName: \"kubernetes.io/projected/8abcdfaa-b5e9-416c-8c8c-91f424ee3c71-kube-api-access-k59dm\") pod \"ovn-northd-0\" (UID: \"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71\") " pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.450573 4937 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-scripts\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.450638 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run-ovn\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.450688 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-kube-api-access-wpncj\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.450746 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.450793 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-log-ovn\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.450829 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-additional-scripts\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.451248 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run-ovn\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.451250 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-log-ovn\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.451324 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.451744 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-additional-scripts\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.452477 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-scripts\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.470788 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-kube-api-access-wpncj\") pod \"ovn-controller-fb8bs-config-gpdpw\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.518217 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.579880 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.673433 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" event={"ID":"80ddcc64-b4cf-4721-9091-bad2da0a933a","Type":"ContainerStarted","Data":"abf8cc64feff1e08f51c99a38971dddde8bffc9321c256d7b0295119aa0ada63"} Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.673511 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.674414 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" event={"ID":"ffd88816-be2b-4c27-a5f5-a061d44c8e63","Type":"ContainerDied","Data":"7a91ffed56891a5fb54446589fcd01ad26e15cde4dd8758e3decf607a3611bd8"} Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.674450 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb9cb7945-t6mcj" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.674465 4937 scope.go:117] "RemoveContainer" containerID="29b485654cc32fcff657a923b08e7da17c3c3c389d1f3c6b32c5fd1f19737978" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.681472 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerStarted","Data":"b04ebbb333d4f10ff1ada90bae837ff31fbc2c42a92578f91421af008c398fda"} Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.683916 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.684026 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db4cc579f-z4cj4" event={"ID":"02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03","Type":"ContainerDied","Data":"3f9b293bfd69784b69a9623de0bfabd752aee8148569a48109ddadcc49b23b1c"} Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.700246 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" podStartSLOduration=7.700223243 podStartE2EDuration="7.700223243s" podCreationTimestamp="2026-01-23 06:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:52:52.69177252 +0000 UTC m=+1172.495539173" watchObservedRunningTime="2026-01-23 06:52:52.700223243 +0000 UTC m=+1172.503989896" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.758418 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb9cb7945-t6mcj"] Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.763892 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fb9cb7945-t6mcj"] Jan 23 06:52:52 crc 
kubenswrapper[4937]: I0123 06:52:52.776618 4937 scope.go:117] "RemoveContainer" containerID="e745b981ea51f1d87e28d7b5a0dcae61c5b25d7f6c27260ca7a7820ed413f03e" Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.805285 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db4cc579f-z4cj4"] Jan 23 06:52:52 crc kubenswrapper[4937]: I0123 06:52:52.809438 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-db4cc579f-z4cj4"] Jan 23 06:52:53 crc kubenswrapper[4937]: I0123 06:52:53.055574 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 06:52:53 crc kubenswrapper[4937]: W0123 06:52:53.070639 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8abcdfaa_b5e9_416c_8c8c_91f424ee3c71.slice/crio-a6ee060de513f3c30bfcf22133a19238a3e8d03b1be0e1e03f2a40c3887cb23a WatchSource:0}: Error finding container a6ee060de513f3c30bfcf22133a19238a3e8d03b1be0e1e03f2a40c3887cb23a: Status 404 returned error can't find the container with id a6ee060de513f3c30bfcf22133a19238a3e8d03b1be0e1e03f2a40c3887cb23a Jan 23 06:52:53 crc kubenswrapper[4937]: I0123 06:52:53.173631 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fb8bs-config-gpdpw"] Jan 23 06:52:53 crc kubenswrapper[4937]: W0123 06:52:53.183957 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02f3474d_ec18_4f96_bc9a_b5bdbbf89e1f.slice/crio-2aa042903ec94a3a168d123a44c15c8672ff95c8b5a3eeb516ba3d4e0a61500b WatchSource:0}: Error finding container 2aa042903ec94a3a168d123a44c15c8672ff95c8b5a3eeb516ba3d4e0a61500b: Status 404 returned error can't find the container with id 2aa042903ec94a3a168d123a44c15c8672ff95c8b5a3eeb516ba3d4e0a61500b Jan 23 06:52:53 crc kubenswrapper[4937]: I0123 06:52:53.692792 4937 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-controller-fb8bs-config-gpdpw" event={"ID":"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f","Type":"ContainerStarted","Data":"c78dd2c4ac4ca5173544ae9c6389a8cb3fa42c87c1ef8b31a3fa7f3b2491ab8f"} Jan 23 06:52:53 crc kubenswrapper[4937]: I0123 06:52:53.693740 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fb8bs-config-gpdpw" event={"ID":"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f","Type":"ContainerStarted","Data":"2aa042903ec94a3a168d123a44c15c8672ff95c8b5a3eeb516ba3d4e0a61500b"} Jan 23 06:52:53 crc kubenswrapper[4937]: I0123 06:52:53.696056 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71","Type":"ContainerStarted","Data":"a6ee060de513f3c30bfcf22133a19238a3e8d03b1be0e1e03f2a40c3887cb23a"} Jan 23 06:52:53 crc kubenswrapper[4937]: I0123 06:52:53.715891 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fb8bs-config-gpdpw" podStartSLOduration=1.7158746520000001 podStartE2EDuration="1.715874652s" podCreationTimestamp="2026-01-23 06:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:52:53.711260561 +0000 UTC m=+1173.515027234" watchObservedRunningTime="2026-01-23 06:52:53.715874652 +0000 UTC m=+1173.519641305" Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.544855 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03" path="/var/lib/kubelet/pods/02bfae3d-60cb-47c2-8d4c-1ccefbdd3a03/volumes" Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.547357 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd88816-be2b-4c27-a5f5-a061d44c8e63" path="/var/lib/kubelet/pods/ffd88816-be2b-4c27-a5f5-a061d44c8e63/volumes" Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.706635 4937 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71","Type":"ContainerStarted","Data":"b1b2dc1ce09f6b5753f59d5ce4a649d7820a3f1dfef339140eeb828bc90f342b"} Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.708042 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8abcdfaa-b5e9-416c-8c8c-91f424ee3c71","Type":"ContainerStarted","Data":"f935cf43de564a69f1820b665e835ab408552b1860bbcd7d1a771221f49b3d9a"} Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.708182 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.709786 4937 generic.go:334] "Generic (PLEG): container finished" podID="02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" containerID="c78dd2c4ac4ca5173544ae9c6389a8cb3fa42c87c1ef8b31a3fa7f3b2491ab8f" exitCode=0 Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.710838 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fb8bs-config-gpdpw" event={"ID":"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f","Type":"ContainerDied","Data":"c78dd2c4ac4ca5173544ae9c6389a8cb3fa42c87c1ef8b31a3fa7f3b2491ab8f"} Jan 23 06:52:54 crc kubenswrapper[4937]: I0123 06:52:54.727161 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.150246358 podStartE2EDuration="2.727141206s" podCreationTimestamp="2026-01-23 06:52:52 +0000 UTC" firstStartedPulling="2026-01-23 06:52:53.072541784 +0000 UTC m=+1172.876308437" lastFinishedPulling="2026-01-23 06:52:53.649436632 +0000 UTC m=+1173.453203285" observedRunningTime="2026-01-23 06:52:54.725287767 +0000 UTC m=+1174.529054440" watchObservedRunningTime="2026-01-23 06:52:54.727141206 +0000 UTC m=+1174.530907869" Jan 23 06:52:55 crc kubenswrapper[4937]: I0123 06:52:55.108904 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Jan 23 06:52:55 crc kubenswrapper[4937]: I0123 06:52:55.242519 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fb8bs" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.080780 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224522 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-log-ovn\") pod \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224575 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run-ovn\") pod \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224655 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-kube-api-access-wpncj\") pod \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224711 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-scripts\") pod \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224779 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run\") pod \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224844 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-additional-scripts\") pod \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\" (UID: \"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f\") " Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.224885 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" (UID: "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225021 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" (UID: "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225196 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run" (OuterVolumeSpecName: "var-run") pod "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" (UID: "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225620 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" (UID: "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225781 4937 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225801 4937 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225812 4937 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225827 4937 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.225950 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-scripts" (OuterVolumeSpecName: "scripts") pod "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" (UID: "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.230362 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-kube-api-access-wpncj" (OuterVolumeSpecName: "kube-api-access-wpncj") pod "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" (UID: "02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f"). InnerVolumeSpecName "kube-api-access-wpncj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.247586 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.294219 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff579b49f-7cxx7"] Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.294436 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerName="dnsmasq-dns" containerID="cri-o://abf8cc64feff1e08f51c99a38971dddde8bffc9321c256d7b0295119aa0ada63" gracePeriod=10 Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.328865 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpncj\" (UniqueName: \"kubernetes.io/projected/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-kube-api-access-wpncj\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.328901 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.637142 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6b6845ff-5j2pc"] Jan 23 06:52:56 crc kubenswrapper[4937]: E0123 06:52:56.637721 4937 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" containerName="ovn-config" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.637735 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" containerName="ovn-config" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.637893 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" containerName="ovn-config" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.638922 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.659468 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6b6845ff-5j2pc"] Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.734489 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-config\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.734553 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-sb\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.734635 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-nb\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: 
\"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.734662 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdv2\" (UniqueName: \"kubernetes.io/projected/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-kube-api-access-2rdv2\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.734701 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-dns-svc\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.737263 4937 generic.go:334] "Generic (PLEG): container finished" podID="f256fcd3-0094-4316-acac-5cc6424f12d0" containerID="df33d906da0f28f963baba73b229db70260a78efff0eb97f0285329caccad047" exitCode=0 Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.737314 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f256fcd3-0094-4316-acac-5cc6424f12d0","Type":"ContainerDied","Data":"df33d906da0f28f963baba73b229db70260a78efff0eb97f0285329caccad047"} Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.777537 4937 generic.go:334] "Generic (PLEG): container finished" podID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerID="abf8cc64feff1e08f51c99a38971dddde8bffc9321c256d7b0295119aa0ada63" exitCode=0 Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.777628 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" 
event={"ID":"80ddcc64-b4cf-4721-9091-bad2da0a933a","Type":"ContainerDied","Data":"abf8cc64feff1e08f51c99a38971dddde8bffc9321c256d7b0295119aa0ada63"} Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.779717 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fb8bs-config-gpdpw" event={"ID":"02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f","Type":"ContainerDied","Data":"2aa042903ec94a3a168d123a44c15c8672ff95c8b5a3eeb516ba3d4e0a61500b"} Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.779743 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa042903ec94a3a168d123a44c15c8672ff95c8b5a3eeb516ba3d4e0a61500b" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.779792 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fb8bs-config-gpdpw" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.814349 4937 generic.go:334] "Generic (PLEG): container finished" podID="cdd3a96c-6f65-4f39-b435-78f7ceed08b5" containerID="068a22dbd83f0dd7241fd7946ef946e1c9ff7e606833067b7700379609d51d2d" exitCode=0 Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.814718 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cdd3a96c-6f65-4f39-b435-78f7ceed08b5","Type":"ContainerDied","Data":"068a22dbd83f0dd7241fd7946ef946e1c9ff7e606833067b7700379609d51d2d"} Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.835900 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-nb\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.835937 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdv2\" 
(UniqueName: \"kubernetes.io/projected/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-kube-api-access-2rdv2\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.835990 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-dns-svc\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.836059 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-config\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.836108 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-sb\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.837358 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-nb\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.838697 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.847088 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-dns-svc\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.856189 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-config\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:56 crc kubenswrapper[4937]: I0123 06:52:56.884234 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdv2\" (UniqueName: \"kubernetes.io/projected/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-kube-api-access-2rdv2\") pod \"dnsmasq-dns-b6b6845ff-5j2pc\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") " pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.017345 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fb8bs-config-gpdpw"] Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.017939 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.031793 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fb8bs-config-gpdpw"] Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.207927 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.279129 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-dns-svc\") pod \"80ddcc64-b4cf-4721-9091-bad2da0a933a\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.279617 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-ovsdbserver-sb\") pod \"80ddcc64-b4cf-4721-9091-bad2da0a933a\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.279751 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-config\") pod \"80ddcc64-b4cf-4721-9091-bad2da0a933a\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.279796 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdsts\" (UniqueName: \"kubernetes.io/projected/80ddcc64-b4cf-4721-9091-bad2da0a933a-kube-api-access-wdsts\") pod \"80ddcc64-b4cf-4721-9091-bad2da0a933a\" (UID: \"80ddcc64-b4cf-4721-9091-bad2da0a933a\") " Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.308138 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80ddcc64-b4cf-4721-9091-bad2da0a933a-kube-api-access-wdsts" (OuterVolumeSpecName: "kube-api-access-wdsts") pod "80ddcc64-b4cf-4721-9091-bad2da0a933a" (UID: "80ddcc64-b4cf-4721-9091-bad2da0a933a"). InnerVolumeSpecName "kube-api-access-wdsts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.350257 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80ddcc64-b4cf-4721-9091-bad2da0a933a" (UID: "80ddcc64-b4cf-4721-9091-bad2da0a933a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.358912 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80ddcc64-b4cf-4721-9091-bad2da0a933a" (UID: "80ddcc64-b4cf-4721-9091-bad2da0a933a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.365156 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-config" (OuterVolumeSpecName: "config") pod "80ddcc64-b4cf-4721-9091-bad2da0a933a" (UID: "80ddcc64-b4cf-4721-9091-bad2da0a933a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.381714 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.381760 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.381774 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdsts\" (UniqueName: \"kubernetes.io/projected/80ddcc64-b4cf-4721-9091-bad2da0a933a-kube-api-access-wdsts\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.381786 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ddcc64-b4cf-4721-9091-bad2da0a933a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.707088 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6b6845ff-5j2pc"] Jan 23 06:52:57 crc kubenswrapper[4937]: W0123 06:52:57.711629 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda51817c1_3ea7_4f5e_801b_10c2d04fa99e.slice/crio-3dc3c2c5b8af7dd62d7efbf9ab18fa9a0d6809286ce52e2a1ebb01a014e6a3da WatchSource:0}: Error finding container 3dc3c2c5b8af7dd62d7efbf9ab18fa9a0d6809286ce52e2a1ebb01a014e6a3da: Status 404 returned error can't find the container with id 3dc3c2c5b8af7dd62d7efbf9ab18fa9a0d6809286ce52e2a1ebb01a014e6a3da Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.827463 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.827464 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff579b49f-7cxx7" event={"ID":"80ddcc64-b4cf-4721-9091-bad2da0a933a","Type":"ContainerDied","Data":"c79870373a35b0a1f58f57550ca507ea57982c9584384e74a72f0d714d7535c2"} Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.827854 4937 scope.go:117] "RemoveContainer" containerID="abf8cc64feff1e08f51c99a38971dddde8bffc9321c256d7b0295119aa0ada63" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.835537 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cdd3a96c-6f65-4f39-b435-78f7ceed08b5","Type":"ContainerStarted","Data":"3499a9f74eabd6bb479e27fe4023ac25f8eff04604a1d2254c6b1ec55ac4cb09"} Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.840683 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f256fcd3-0094-4316-acac-5cc6424f12d0","Type":"ContainerStarted","Data":"40b5a6800ea5d6be334f794e93e94f0e319d1d1543147135f19e14474654de2a"} Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.842763 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" event={"ID":"a51817c1-3ea7-4f5e-801b-10c2d04fa99e","Type":"ContainerStarted","Data":"3dc3c2c5b8af7dd62d7efbf9ab18fa9a0d6809286ce52e2a1ebb01a014e6a3da"} Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.849810 4937 scope.go:117] "RemoveContainer" containerID="2739596eb4e338e4e34ca74c0e171ba62655d21dbbc73c31f18251d7d3bca80f" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.865923 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.355322986000001 podStartE2EDuration="2m16.8658966s" podCreationTimestamp="2026-01-23 06:50:41 +0000 UTC" 
firstStartedPulling="2026-01-23 06:50:43.502257533 +0000 UTC m=+1043.306024186" lastFinishedPulling="2026-01-23 06:52:48.012831147 +0000 UTC m=+1167.816597800" observedRunningTime="2026-01-23 06:52:57.856998456 +0000 UTC m=+1177.660765109" watchObservedRunningTime="2026-01-23 06:52:57.8658966 +0000 UTC m=+1177.669663253" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.881901 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.522906187 podStartE2EDuration="2m15.881882421s" podCreationTimestamp="2026-01-23 06:50:42 +0000 UTC" firstStartedPulling="2026-01-23 06:50:45.084916452 +0000 UTC m=+1044.888683095" lastFinishedPulling="2026-01-23 06:52:48.443892646 +0000 UTC m=+1168.247659329" observedRunningTime="2026-01-23 06:52:57.876978892 +0000 UTC m=+1177.680745555" watchObservedRunningTime="2026-01-23 06:52:57.881882421 +0000 UTC m=+1177.685649074" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.898217 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff579b49f-7cxx7"] Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.907079 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff579b49f-7cxx7"] Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.927837 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 23 06:52:57 crc kubenswrapper[4937]: E0123 06:52:57.928160 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerName="dnsmasq-dns" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.928180 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerName="dnsmasq-dns" Jan 23 06:52:57 crc kubenswrapper[4937]: E0123 06:52:57.928211 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" 
containerName="init" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.928218 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerName="init" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.928386 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" containerName="dnsmasq-dns" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.932909 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.934726 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.934908 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-cc4mc" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.935164 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.935212 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.992920 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e96a6620-5e97-4f3b-95b3-52c8b3161098-cache\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.993052 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcb6\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-kube-api-access-lfcb6\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 
23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.993086 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e96a6620-5e97-4f3b-95b3-52c8b3161098-lock\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.993155 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96a6620-5e97-4f3b-95b3-52c8b3161098-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.993186 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:57 crc kubenswrapper[4937]: I0123 06:52:57.993387 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.010046 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.094481 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96a6620-5e97-4f3b-95b3-52c8b3161098-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 
23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.094530 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.094646 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.094686 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e96a6620-5e97-4f3b-95b3-52c8b3161098-cache\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.094722 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcb6\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-kube-api-access-lfcb6\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: E0123 06:52:58.094717 4937 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.094741 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e96a6620-5e97-4f3b-95b3-52c8b3161098-lock\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: E0123 06:52:58.094748 4937 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:52:58 crc kubenswrapper[4937]: E0123 06:52:58.094810 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift podName:e96a6620-5e97-4f3b-95b3-52c8b3161098 nodeName:}" failed. No retries permitted until 2026-01-23 06:52:58.594787236 +0000 UTC m=+1178.398553889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift") pod "swift-storage-0" (UID: "e96a6620-5e97-4f3b-95b3-52c8b3161098") : configmap "swift-ring-files" not found Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.095338 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e96a6620-5e97-4f3b-95b3-52c8b3161098-cache\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.095360 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.095420 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e96a6620-5e97-4f3b-95b3-52c8b3161098-lock\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.099656 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e96a6620-5e97-4f3b-95b3-52c8b3161098-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.112033 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcb6\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-kube-api-access-lfcb6\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.123296 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.501300 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dq8lz"] Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.502862 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.505008 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.505260 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.505802 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.518824 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dq8lz"] Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.547211 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f" path="/var/lib/kubelet/pods/02f3474d-ec18-4f96-bc9a-b5bdbbf89e1f/volumes" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.548195 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ddcc64-b4cf-4721-9091-bad2da0a933a" path="/var/lib/kubelet/pods/80ddcc64-b4cf-4721-9091-bad2da0a933a/volumes" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.602700 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-ring-data-devices\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.602798 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9972df0-0d7d-4346-a77c-546a458a1677-etc-swift\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " 
pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.602848 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.602904 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-combined-ca-bundle\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: E0123 06:52:58.603060 4937 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:52:58 crc kubenswrapper[4937]: E0123 06:52:58.603089 4937 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:52:58 crc kubenswrapper[4937]: E0123 06:52:58.603160 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift podName:e96a6620-5e97-4f3b-95b3-52c8b3161098 nodeName:}" failed. No retries permitted until 2026-01-23 06:52:59.603140489 +0000 UTC m=+1179.406907142 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift") pod "swift-storage-0" (UID: "e96a6620-5e97-4f3b-95b3-52c8b3161098") : configmap "swift-ring-files" not found Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.603201 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-swiftconf\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.603289 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-scripts\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.603383 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-dispersionconf\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.603442 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlmmw\" (UniqueName: \"kubernetes.io/projected/b9972df0-0d7d-4346-a77c-546a458a1677-kube-api-access-hlmmw\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705360 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-ring-data-devices\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705469 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9972df0-0d7d-4346-a77c-546a458a1677-etc-swift\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705535 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-combined-ca-bundle\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705566 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-swiftconf\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705626 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-scripts\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705663 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-dispersionconf\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.705694 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlmmw\" (UniqueName: \"kubernetes.io/projected/b9972df0-0d7d-4346-a77c-546a458a1677-kube-api-access-hlmmw\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.706401 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-ring-data-devices\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.706509 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9972df0-0d7d-4346-a77c-546a458a1677-etc-swift\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.707044 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-scripts\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.711431 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-dispersionconf\") pod \"swift-ring-rebalance-dq8lz\" 
(UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.711547 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-combined-ca-bundle\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.711894 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-swiftconf\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.727423 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlmmw\" (UniqueName: \"kubernetes.io/projected/b9972df0-0d7d-4346-a77c-546a458a1677-kube-api-access-hlmmw\") pod \"swift-ring-rebalance-dq8lz\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") " pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.821331 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dq8lz" Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.852085 4937 generic.go:334] "Generic (PLEG): container finished" podID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerID="f1ba1fbfb564452023f48b86cbe289ab78ea6ff3174f54775aa6bbcf5c5f36e2" exitCode=0 Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.852139 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" event={"ID":"a51817c1-3ea7-4f5e-801b-10c2d04fa99e","Type":"ContainerDied","Data":"f1ba1fbfb564452023f48b86cbe289ab78ea6ff3174f54775aa6bbcf5c5f36e2"} Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.855833 4937 generic.go:334] "Generic (PLEG): container finished" podID="32a4f804-e737-4bf8-b092-b20127604273" containerID="b04ebbb333d4f10ff1ada90bae837ff31fbc2c42a92578f91421af008c398fda" exitCode=0 Jan 23 06:52:58 crc kubenswrapper[4937]: I0123 06:52:58.855979 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerDied","Data":"b04ebbb333d4f10ff1ada90bae837ff31fbc2c42a92578f91421af008c398fda"} Jan 23 06:52:59 crc kubenswrapper[4937]: I0123 06:52:59.287569 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dq8lz"] Jan 23 06:52:59 crc kubenswrapper[4937]: W0123 06:52:59.294413 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9972df0_0d7d_4346_a77c_546a458a1677.slice/crio-c4f81aca7fb681185adcdc7874b8779136fb9a16182f13cb8e7abaf3115d43d0 WatchSource:0}: Error finding container c4f81aca7fb681185adcdc7874b8779136fb9a16182f13cb8e7abaf3115d43d0: Status 404 returned error can't find the container with id c4f81aca7fb681185adcdc7874b8779136fb9a16182f13cb8e7abaf3115d43d0 Jan 23 06:52:59 crc kubenswrapper[4937]: I0123 06:52:59.622920 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:52:59 crc kubenswrapper[4937]: E0123 06:52:59.623246 4937 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 06:52:59 crc kubenswrapper[4937]: E0123 06:52:59.623295 4937 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 06:52:59 crc kubenswrapper[4937]: E0123 06:52:59.623371 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift podName:e96a6620-5e97-4f3b-95b3-52c8b3161098 nodeName:}" failed. No retries permitted until 2026-01-23 06:53:01.623344938 +0000 UTC m=+1181.427111631 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift") pod "swift-storage-0" (UID: "e96a6620-5e97-4f3b-95b3-52c8b3161098") : configmap "swift-ring-files" not found
Jan 23 06:52:59 crc kubenswrapper[4937]: I0123 06:52:59.865227 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dq8lz" event={"ID":"b9972df0-0d7d-4346-a77c-546a458a1677","Type":"ContainerStarted","Data":"c4f81aca7fb681185adcdc7874b8779136fb9a16182f13cb8e7abaf3115d43d0"}
Jan 23 06:52:59 crc kubenswrapper[4937]: I0123 06:52:59.867436 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" event={"ID":"a51817c1-3ea7-4f5e-801b-10c2d04fa99e","Type":"ContainerStarted","Data":"3d0521cf144ca35080afdf01c18739f5dfc2ffb4f3090cd8cf7a81d72ac93292"}
Jan 23 06:52:59 crc kubenswrapper[4937]: I0123 06:52:59.867599 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc"
Jan 23 06:52:59 crc kubenswrapper[4937]: I0123 06:52:59.890505 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" podStartSLOduration=3.890480781 podStartE2EDuration="3.890480781s" podCreationTimestamp="2026-01-23 06:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:52:59.883638961 +0000 UTC m=+1179.687405724" watchObservedRunningTime="2026-01-23 06:52:59.890480781 +0000 UTC m=+1179.694247454"
Jan 23 06:53:01 crc kubenswrapper[4937]: I0123 06:53:01.665226 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0"
Jan 23 06:53:01 crc kubenswrapper[4937]: E0123 06:53:01.665468 4937 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 06:53:01 crc kubenswrapper[4937]: E0123 06:53:01.665613 4937 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 06:53:01 crc kubenswrapper[4937]: E0123 06:53:01.665667 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift podName:e96a6620-5e97-4f3b-95b3-52c8b3161098 nodeName:}" failed. No retries permitted until 2026-01-23 06:53:05.665650137 +0000 UTC m=+1185.469416790 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift") pod "swift-storage-0" (UID: "e96a6620-5e97-4f3b-95b3-52c8b3161098") : configmap "swift-ring-files" not found
Jan 23 06:53:02 crc kubenswrapper[4937]: I0123 06:53:02.927450 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 23 06:53:02 crc kubenswrapper[4937]: I0123 06:53:02.927781 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 23 06:53:03 crc kubenswrapper[4937]: I0123 06:53:03.029105 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 23 06:53:03 crc kubenswrapper[4937]: I0123 06:53:03.999164 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.338649 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.338722 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.471634 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2f8d-account-create-update-4bzd2"]
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.473010 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.495223 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.504187 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wcq4t"]
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.505303 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.523158 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f8d-account-create-update-4bzd2"]
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.529234 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wcq4t"]
Jan 23 06:53:04 crc kubenswrapper[4937]: E0123 06:53:04.565112 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.624466 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc194a9-e4ca-43fa-a46c-da9090073151-operator-scripts\") pod \"keystone-2f8d-account-create-update-4bzd2\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.624546 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjg2h\" (UniqueName: \"kubernetes.io/projected/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-kube-api-access-qjg2h\") pod \"keystone-db-create-wcq4t\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.624660 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698dd\" (UniqueName: \"kubernetes.io/projected/bcc194a9-e4ca-43fa-a46c-da9090073151-kube-api-access-698dd\") pod \"keystone-2f8d-account-create-update-4bzd2\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.624680 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-operator-scripts\") pod \"keystone-db-create-wcq4t\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.659271 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.726218 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjg2h\" (UniqueName: \"kubernetes.io/projected/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-kube-api-access-qjg2h\") pod \"keystone-db-create-wcq4t\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.726343 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698dd\" (UniqueName: \"kubernetes.io/projected/bcc194a9-e4ca-43fa-a46c-da9090073151-kube-api-access-698dd\") pod \"keystone-2f8d-account-create-update-4bzd2\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.726364 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-operator-scripts\") pod \"keystone-db-create-wcq4t\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.726409 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc194a9-e4ca-43fa-a46c-da9090073151-operator-scripts\") pod \"keystone-2f8d-account-create-update-4bzd2\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.727149 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc194a9-e4ca-43fa-a46c-da9090073151-operator-scripts\") pod \"keystone-2f8d-account-create-update-4bzd2\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.727956 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-operator-scripts\") pod \"keystone-db-create-wcq4t\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.747977 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698dd\" (UniqueName: \"kubernetes.io/projected/bcc194a9-e4ca-43fa-a46c-da9090073151-kube-api-access-698dd\") pod \"keystone-2f8d-account-create-update-4bzd2\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.753391 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjg2h\" (UniqueName: \"kubernetes.io/projected/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-kube-api-access-qjg2h\") pod \"keystone-db-create-wcq4t\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.838009 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f8d-account-create-update-4bzd2"
Jan 23 06:53:04 crc kubenswrapper[4937]: I0123 06:53:04.881619 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wcq4t"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.012516 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gc5jw"]
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.013735 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.024577 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gc5jw"]
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.050574 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.133577 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drl2\" (UniqueName: \"kubernetes.io/projected/ae36acc6-0eaa-434a-bc7c-067729b8b888-kube-api-access-4drl2\") pod \"placement-db-create-gc5jw\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.133679 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae36acc6-0eaa-434a-bc7c-067729b8b888-operator-scripts\") pod \"placement-db-create-gc5jw\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.138260 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3b31-account-create-update-4tdq9"]
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.141065 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.144985 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.147646 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3b31-account-create-update-4tdq9"]
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.235866 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27668306-a075-4ea7-a06c-05b81239ec08-operator-scripts\") pod \"placement-3b31-account-create-update-4tdq9\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.235963 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2gn8\" (UniqueName: \"kubernetes.io/projected/27668306-a075-4ea7-a06c-05b81239ec08-kube-api-access-w2gn8\") pod \"placement-3b31-account-create-update-4tdq9\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.236003 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4drl2\" (UniqueName: \"kubernetes.io/projected/ae36acc6-0eaa-434a-bc7c-067729b8b888-kube-api-access-4drl2\") pod \"placement-db-create-gc5jw\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.236055 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae36acc6-0eaa-434a-bc7c-067729b8b888-operator-scripts\") pod \"placement-db-create-gc5jw\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.240320 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae36acc6-0eaa-434a-bc7c-067729b8b888-operator-scripts\") pod \"placement-db-create-gc5jw\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.254480 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4drl2\" (UniqueName: \"kubernetes.io/projected/ae36acc6-0eaa-434a-bc7c-067729b8b888-kube-api-access-4drl2\") pod \"placement-db-create-gc5jw\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.338299 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27668306-a075-4ea7-a06c-05b81239ec08-operator-scripts\") pod \"placement-3b31-account-create-update-4tdq9\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.338434 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2gn8\" (UniqueName: \"kubernetes.io/projected/27668306-a075-4ea7-a06c-05b81239ec08-kube-api-access-w2gn8\") pod \"placement-3b31-account-create-update-4tdq9\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.338719 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gc5jw"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.340539 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27668306-a075-4ea7-a06c-05b81239ec08-operator-scripts\") pod \"placement-3b31-account-create-update-4tdq9\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.365070 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2gn8\" (UniqueName: \"kubernetes.io/projected/27668306-a075-4ea7-a06c-05b81239ec08-kube-api-access-w2gn8\") pod \"placement-3b31-account-create-update-4tdq9\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.527654 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3b31-account-create-update-4tdq9"
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.743167 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0"
Jan 23 06:53:05 crc kubenswrapper[4937]: E0123 06:53:05.743390 4937 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 06:53:05 crc kubenswrapper[4937]: E0123 06:53:05.743405 4937 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 06:53:05 crc kubenswrapper[4937]: E0123 06:53:05.743448 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift podName:e96a6620-5e97-4f3b-95b3-52c8b3161098 nodeName:}" failed. No retries permitted until 2026-01-23 06:53:13.743434853 +0000 UTC m=+1193.547201506 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift") pod "swift-storage-0" (UID: "e96a6620-5e97-4f3b-95b3-52c8b3161098") : configmap "swift-ring-files" not found
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.811374 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f8d-account-create-update-4bzd2"]
Jan 23 06:53:05 crc kubenswrapper[4937]: W0123 06:53:05.819260 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcc194a9_e4ca_43fa_a46c_da9090073151.slice/crio-c259d3fe71225293ece8a82db9ca2bcc32949227ada2fe05050746ac015e0210 WatchSource:0}: Error finding container c259d3fe71225293ece8a82db9ca2bcc32949227ada2fe05050746ac015e0210: Status 404 returned error can't find the container with id c259d3fe71225293ece8a82db9ca2bcc32949227ada2fe05050746ac015e0210
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.830330 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wcq4t"]
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.922703 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f8d-account-create-update-4bzd2" event={"ID":"bcc194a9-e4ca-43fa-a46c-da9090073151","Type":"ContainerStarted","Data":"c259d3fe71225293ece8a82db9ca2bcc32949227ada2fe05050746ac015e0210"}
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.924730 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerStarted","Data":"4c708104c166802c9cecdab7bd1b2cc9a2b4aef3d884ca4fadbb72584caf0935"}
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.925653 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wcq4t" event={"ID":"f45e07a8-7bf3-4d38-9ef2-169a76dcf129","Type":"ContainerStarted","Data":"fd026a61c0b4255fa281555d4ea7b5ac0912c6af5f1bfa6cfcedd81b57ce0143"}
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.926777 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dq8lz" event={"ID":"b9972df0-0d7d-4346-a77c-546a458a1677","Type":"ContainerStarted","Data":"f2f8d4ea5d2698121e047b30aaf9ae075803a4c0fa9032f85d619304329c783d"}
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.945051 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dq8lz" podStartSLOduration=5.247625646 podStartE2EDuration="7.94500893s" podCreationTimestamp="2026-01-23 06:52:58 +0000 UTC" firstStartedPulling="2026-01-23 06:52:59.296867363 +0000 UTC m=+1179.100634016" lastFinishedPulling="2026-01-23 06:53:01.994250647 +0000 UTC m=+1181.798017300" observedRunningTime="2026-01-23 06:53:05.944800074 +0000 UTC m=+1185.748566747" watchObservedRunningTime="2026-01-23 06:53:05.94500893 +0000 UTC m=+1185.748775583"
Jan 23 06:53:05 crc kubenswrapper[4937]: W0123 06:53:05.969983 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae36acc6_0eaa_434a_bc7c_067729b8b888.slice/crio-6866a20f517b76c796caeee5c1f931af88b169943294acb302fcca9bfa5fa8d0 WatchSource:0}: Error finding container 6866a20f517b76c796caeee5c1f931af88b169943294acb302fcca9bfa5fa8d0: Status 404 returned error can't find the container with id 6866a20f517b76c796caeee5c1f931af88b169943294acb302fcca9bfa5fa8d0
Jan 23 06:53:05 crc kubenswrapper[4937]: I0123 06:53:05.970706 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gc5jw"]
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.072664 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3b31-account-create-update-4tdq9"]
Jan 23 06:53:06 crc kubenswrapper[4937]: W0123 06:53:06.076793 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27668306_a075_4ea7_a06c_05b81239ec08.slice/crio-7c635e6d40943bed9a39a8faa10e32a62c32848ecb2a1b307752a224242a7d45 WatchSource:0}: Error finding container 7c635e6d40943bed9a39a8faa10e32a62c32848ecb2a1b307752a224242a7d45: Status 404 returned error can't find the container with id 7c635e6d40943bed9a39a8faa10e32a62c32848ecb2a1b307752a224242a7d45
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.747488 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-nnvvn"]
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.748819 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.755425 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-nnvvn"]
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.866313 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9dea0e-3304-42ee-92b2-c7d39db20bba-operator-scripts\") pod \"watcher-db-create-nnvvn\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.866742 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98wmh\" (UniqueName: \"kubernetes.io/projected/5a9dea0e-3304-42ee-92b2-c7d39db20bba-kube-api-access-98wmh\") pod \"watcher-db-create-nnvvn\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.870435 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-54b1-account-create-update-8xcgx"]
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.873948 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.876194 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.879723 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-54b1-account-create-update-8xcgx"]
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.933878 4937 generic.go:334] "Generic (PLEG): container finished" podID="ae36acc6-0eaa-434a-bc7c-067729b8b888" containerID="2950a37cdedee3935d72518cb33c511d33eb3a15c4b54de346f021b84fada86d" exitCode=0
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.933964 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gc5jw" event={"ID":"ae36acc6-0eaa-434a-bc7c-067729b8b888","Type":"ContainerDied","Data":"2950a37cdedee3935d72518cb33c511d33eb3a15c4b54de346f021b84fada86d"}
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.934012 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gc5jw" event={"ID":"ae36acc6-0eaa-434a-bc7c-067729b8b888","Type":"ContainerStarted","Data":"6866a20f517b76c796caeee5c1f931af88b169943294acb302fcca9bfa5fa8d0"}
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.935615 4937 generic.go:334] "Generic (PLEG): container finished" podID="f45e07a8-7bf3-4d38-9ef2-169a76dcf129" containerID="099473046a862d14ff6fcdd759485d110ba6dc14ce46d611f97221058cf45285" exitCode=0
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.935689 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wcq4t" event={"ID":"f45e07a8-7bf3-4d38-9ef2-169a76dcf129","Type":"ContainerDied","Data":"099473046a862d14ff6fcdd759485d110ba6dc14ce46d611f97221058cf45285"}
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.940068 4937 generic.go:334] "Generic (PLEG): container finished" podID="bcc194a9-e4ca-43fa-a46c-da9090073151" containerID="bd320fa4a8d387736df2cf963e92cb08ebbc179cac4ba082b8d622ac9291ef11" exitCode=0
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.940135 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f8d-account-create-update-4bzd2" event={"ID":"bcc194a9-e4ca-43fa-a46c-da9090073151","Type":"ContainerDied","Data":"bd320fa4a8d387736df2cf963e92cb08ebbc179cac4ba082b8d622ac9291ef11"}
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.941951 4937 generic.go:334] "Generic (PLEG): container finished" podID="27668306-a075-4ea7-a06c-05b81239ec08" containerID="b996e6a162c54accf3017376c131f8824f7cec146dc381fc8506b7c985de1eb2" exitCode=0
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.942064 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b31-account-create-update-4tdq9" event={"ID":"27668306-a075-4ea7-a06c-05b81239ec08","Type":"ContainerDied","Data":"b996e6a162c54accf3017376c131f8824f7cec146dc381fc8506b7c985de1eb2"}
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.942111 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b31-account-create-update-4tdq9" event={"ID":"27668306-a075-4ea7-a06c-05b81239ec08","Type":"ContainerStarted","Data":"7c635e6d40943bed9a39a8faa10e32a62c32848ecb2a1b307752a224242a7d45"}
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.968393 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc26b8a1-b928-42ae-b7fd-385e4cce725c-operator-scripts\") pod \"watcher-54b1-account-create-update-8xcgx\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.968459 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcwn\" (UniqueName: \"kubernetes.io/projected/fc26b8a1-b928-42ae-b7fd-385e4cce725c-kube-api-access-xlcwn\") pod \"watcher-54b1-account-create-update-8xcgx\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.968761 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9dea0e-3304-42ee-92b2-c7d39db20bba-operator-scripts\") pod \"watcher-db-create-nnvvn\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.968839 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98wmh\" (UniqueName: \"kubernetes.io/projected/5a9dea0e-3304-42ee-92b2-c7d39db20bba-kube-api-access-98wmh\") pod \"watcher-db-create-nnvvn\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.970018 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9dea0e-3304-42ee-92b2-c7d39db20bba-operator-scripts\") pod \"watcher-db-create-nnvvn\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:06 crc kubenswrapper[4937]: I0123 06:53:06.988426 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98wmh\" (UniqueName: \"kubernetes.io/projected/5a9dea0e-3304-42ee-92b2-c7d39db20bba-kube-api-access-98wmh\") pod \"watcher-db-create-nnvvn\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.019781 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.068696 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"]
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.068978 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" containerName="dnsmasq-dns" containerID="cri-o://f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1" gracePeriod=10
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.070427 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc26b8a1-b928-42ae-b7fd-385e4cce725c-operator-scripts\") pod \"watcher-54b1-account-create-update-8xcgx\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.070474 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlcwn\" (UniqueName: \"kubernetes.io/projected/fc26b8a1-b928-42ae-b7fd-385e4cce725c-kube-api-access-xlcwn\") pod \"watcher-54b1-account-create-update-8xcgx\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.073043 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc26b8a1-b928-42ae-b7fd-385e4cce725c-operator-scripts\") pod \"watcher-54b1-account-create-update-8xcgx\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.102534 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nnvvn"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.168129 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcwn\" (UniqueName: \"kubernetes.io/projected/fc26b8a1-b928-42ae-b7fd-385e4cce725c-kube-api-access-xlcwn\") pod \"watcher-54b1-account-create-update-8xcgx\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.202509 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-54b1-account-create-update-8xcgx"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.577478 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.582121 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.680090 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq7bs\" (UniqueName: \"kubernetes.io/projected/3691b989-e659-420b-adad-bbf7ac2406cc-kube-api-access-cq7bs\") pod \"3691b989-e659-420b-adad-bbf7ac2406cc\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") "
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.680229 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-sb\") pod \"3691b989-e659-420b-adad-bbf7ac2406cc\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") "
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.680305 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-nb\") pod \"3691b989-e659-420b-adad-bbf7ac2406cc\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") "
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.680458 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-dns-svc\") pod \"3691b989-e659-420b-adad-bbf7ac2406cc\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") "
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.680543 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-config\") pod \"3691b989-e659-420b-adad-bbf7ac2406cc\" (UID: \"3691b989-e659-420b-adad-bbf7ac2406cc\") "
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.734637 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-54b1-account-create-update-8xcgx"]
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.745523 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-nnvvn"]
Jan 23 06:53:07 crc kubenswrapper[4937]: W0123 06:53:07.863270 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc26b8a1_b928_42ae_b7fd_385e4cce725c.slice/crio-f3f0999f64a4926d4392e20a2e3f1b7df9048665449c5af9c9442262ecd755c3 WatchSource:0}: Error finding container f3f0999f64a4926d4392e20a2e3f1b7df9048665449c5af9c9442262ecd755c3: Status 404 returned error can't find the container with id f3f0999f64a4926d4392e20a2e3f1b7df9048665449c5af9c9442262ecd755c3
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.864518 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3691b989-e659-420b-adad-bbf7ac2406cc-kube-api-access-cq7bs" (OuterVolumeSpecName: "kube-api-access-cq7bs") pod "3691b989-e659-420b-adad-bbf7ac2406cc" (UID: "3691b989-e659-420b-adad-bbf7ac2406cc"). InnerVolumeSpecName "kube-api-access-cq7bs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.884138 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq7bs\" (UniqueName: \"kubernetes.io/projected/3691b989-e659-420b-adad-bbf7ac2406cc-kube-api-access-cq7bs\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.918263 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3691b989-e659-420b-adad-bbf7ac2406cc" (UID: "3691b989-e659-420b-adad-bbf7ac2406cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.920703 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3691b989-e659-420b-adad-bbf7ac2406cc" (UID: "3691b989-e659-420b-adad-bbf7ac2406cc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.921829 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-config" (OuterVolumeSpecName: "config") pod "3691b989-e659-420b-adad-bbf7ac2406cc" (UID: "3691b989-e659-420b-adad-bbf7ac2406cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.935337 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3691b989-e659-420b-adad-bbf7ac2406cc" (UID: "3691b989-e659-420b-adad-bbf7ac2406cc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.973552 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nnvvn" event={"ID":"5a9dea0e-3304-42ee-92b2-c7d39db20bba","Type":"ContainerStarted","Data":"b78960dcd016e0908c599b285204d653eef222f59f0b9303721ca61faa7e6408"}
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.975923 4937 generic.go:334] "Generic (PLEG): container finished" podID="3691b989-e659-420b-adad-bbf7ac2406cc" containerID="f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1" exitCode=0
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.975984 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" event={"ID":"3691b989-e659-420b-adad-bbf7ac2406cc","Type":"ContainerDied","Data":"f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1"}
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.976007 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" event={"ID":"3691b989-e659-420b-adad-bbf7ac2406cc","Type":"ContainerDied","Data":"bd530326a3bd08f4ed721b8a2c7472dee472462cd5898a3dd23addbf1e8fd643"}
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.976028 4937 scope.go:117] "RemoveContainer" containerID="f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1"
Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.976167 4937 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-7bcd8bcc77-cjvkj" Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.983952 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-54b1-account-create-update-8xcgx" event={"ID":"fc26b8a1-b928-42ae-b7fd-385e4cce725c","Type":"ContainerStarted","Data":"f3f0999f64a4926d4392e20a2e3f1b7df9048665449c5af9c9442262ecd755c3"} Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.987524 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.987547 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.987556 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:07 crc kubenswrapper[4937]: I0123 06:53:07.987564 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3691b989-e659-420b-adad-bbf7ac2406cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.003512 4937 scope.go:117] "RemoveContainer" containerID="e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.035852 4937 scope.go:117] "RemoveContainer" containerID="f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1" Jan 23 06:53:08 crc kubenswrapper[4937]: E0123 06:53:08.036501 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1\": container with ID starting with f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1 not found: ID does not exist" containerID="f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.036545 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1"} err="failed to get container status \"f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1\": rpc error: code = NotFound desc = could not find container \"f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1\": container with ID starting with f218060b65c3e980462e883b1cb40ee8cd1ceecc6574a961439567a0e8c103b1 not found: ID does not exist" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.036575 4937 scope.go:117] "RemoveContainer" containerID="e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf" Jan 23 06:53:08 crc kubenswrapper[4937]: E0123 06:53:08.036964 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf\": container with ID starting with e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf not found: ID does not exist" containerID="e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.036989 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf"} err="failed to get container status \"e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf\": rpc error: code = NotFound desc = could not find container \"e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf\": container with ID 
starting with e142c41f7445924230e86d3ab498e65f730c3bcf77029784e20d94ad84acdacf not found: ID does not exist" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.090324 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"] Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.110733 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bcd8bcc77-cjvkj"] Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.370923 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gc5jw" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.484568 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f8d-account-create-update-4bzd2" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.495858 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drl2\" (UniqueName: \"kubernetes.io/projected/ae36acc6-0eaa-434a-bc7c-067729b8b888-kube-api-access-4drl2\") pod \"ae36acc6-0eaa-434a-bc7c-067729b8b888\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.495963 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae36acc6-0eaa-434a-bc7c-067729b8b888-operator-scripts\") pod \"ae36acc6-0eaa-434a-bc7c-067729b8b888\" (UID: \"ae36acc6-0eaa-434a-bc7c-067729b8b888\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.497107 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae36acc6-0eaa-434a-bc7c-067729b8b888-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae36acc6-0eaa-434a-bc7c-067729b8b888" (UID: "ae36acc6-0eaa-434a-bc7c-067729b8b888"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.503930 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae36acc6-0eaa-434a-bc7c-067729b8b888-kube-api-access-4drl2" (OuterVolumeSpecName: "kube-api-access-4drl2") pod "ae36acc6-0eaa-434a-bc7c-067729b8b888" (UID: "ae36acc6-0eaa-434a-bc7c-067729b8b888"). InnerVolumeSpecName "kube-api-access-4drl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.536820 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" path="/var/lib/kubelet/pods/3691b989-e659-420b-adad-bbf7ac2406cc/volumes" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.551302 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wcq4t" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.563776 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3b31-account-create-update-4tdq9" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.597755 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27668306-a075-4ea7-a06c-05b81239ec08-operator-scripts\") pod \"27668306-a075-4ea7-a06c-05b81239ec08\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.597850 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-operator-scripts\") pod \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.597886 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-698dd\" (UniqueName: \"kubernetes.io/projected/bcc194a9-e4ca-43fa-a46c-da9090073151-kube-api-access-698dd\") pod \"bcc194a9-e4ca-43fa-a46c-da9090073151\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.598175 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjg2h\" (UniqueName: \"kubernetes.io/projected/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-kube-api-access-qjg2h\") pod \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\" (UID: \"f45e07a8-7bf3-4d38-9ef2-169a76dcf129\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.598665 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27668306-a075-4ea7-a06c-05b81239ec08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27668306-a075-4ea7-a06c-05b81239ec08" (UID: "27668306-a075-4ea7-a06c-05b81239ec08"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.598668 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f45e07a8-7bf3-4d38-9ef2-169a76dcf129" (UID: "f45e07a8-7bf3-4d38-9ef2-169a76dcf129"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.598778 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2gn8\" (UniqueName: \"kubernetes.io/projected/27668306-a075-4ea7-a06c-05b81239ec08-kube-api-access-w2gn8\") pod \"27668306-a075-4ea7-a06c-05b81239ec08\" (UID: \"27668306-a075-4ea7-a06c-05b81239ec08\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.598830 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc194a9-e4ca-43fa-a46c-da9090073151-operator-scripts\") pod \"bcc194a9-e4ca-43fa-a46c-da9090073151\" (UID: \"bcc194a9-e4ca-43fa-a46c-da9090073151\") " Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.599445 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae36acc6-0eaa-434a-bc7c-067729b8b888-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.599471 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27668306-a075-4ea7-a06c-05b81239ec08-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.599480 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.599488 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4drl2\" (UniqueName: \"kubernetes.io/projected/ae36acc6-0eaa-434a-bc7c-067729b8b888-kube-api-access-4drl2\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.600342 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcc194a9-e4ca-43fa-a46c-da9090073151-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcc194a9-e4ca-43fa-a46c-da9090073151" (UID: "bcc194a9-e4ca-43fa-a46c-da9090073151"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.601450 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-kube-api-access-qjg2h" (OuterVolumeSpecName: "kube-api-access-qjg2h") pod "f45e07a8-7bf3-4d38-9ef2-169a76dcf129" (UID: "f45e07a8-7bf3-4d38-9ef2-169a76dcf129"). InnerVolumeSpecName "kube-api-access-qjg2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.602423 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc194a9-e4ca-43fa-a46c-da9090073151-kube-api-access-698dd" (OuterVolumeSpecName: "kube-api-access-698dd") pod "bcc194a9-e4ca-43fa-a46c-da9090073151" (UID: "bcc194a9-e4ca-43fa-a46c-da9090073151"). InnerVolumeSpecName "kube-api-access-698dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.602916 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27668306-a075-4ea7-a06c-05b81239ec08-kube-api-access-w2gn8" (OuterVolumeSpecName: "kube-api-access-w2gn8") pod "27668306-a075-4ea7-a06c-05b81239ec08" (UID: "27668306-a075-4ea7-a06c-05b81239ec08"). InnerVolumeSpecName "kube-api-access-w2gn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.701476 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjg2h\" (UniqueName: \"kubernetes.io/projected/f45e07a8-7bf3-4d38-9ef2-169a76dcf129-kube-api-access-qjg2h\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.701781 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2gn8\" (UniqueName: \"kubernetes.io/projected/27668306-a075-4ea7-a06c-05b81239ec08-kube-api-access-w2gn8\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.701793 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcc194a9-e4ca-43fa-a46c-da9090073151-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:08 crc kubenswrapper[4937]: I0123 06:53:08.701802 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-698dd\" (UniqueName: \"kubernetes.io/projected/bcc194a9-e4ca-43fa-a46c-da9090073151-kube-api-access-698dd\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.004792 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerStarted","Data":"c5ec3c5460b6b5574ecf121c7ceb7dcfbb0146f2ad6bedfaf4bd795d46f99862"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 
06:53:09.006913 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gc5jw" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.006933 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gc5jw" event={"ID":"ae36acc6-0eaa-434a-bc7c-067729b8b888","Type":"ContainerDied","Data":"6866a20f517b76c796caeee5c1f931af88b169943294acb302fcca9bfa5fa8d0"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.007169 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6866a20f517b76c796caeee5c1f931af88b169943294acb302fcca9bfa5fa8d0" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.008301 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wcq4t" event={"ID":"f45e07a8-7bf3-4d38-9ef2-169a76dcf129","Type":"ContainerDied","Data":"fd026a61c0b4255fa281555d4ea7b5ac0912c6af5f1bfa6cfcedd81b57ce0143"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.008327 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wcq4t" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.008350 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd026a61c0b4255fa281555d4ea7b5ac0912c6af5f1bfa6cfcedd81b57ce0143" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.009536 4937 generic.go:334] "Generic (PLEG): container finished" podID="fc26b8a1-b928-42ae-b7fd-385e4cce725c" containerID="14b704ce5d3d0358f39246949cfa3e8fae8902aa712e1431939928a447e73302" exitCode=0 Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.009579 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-54b1-account-create-update-8xcgx" event={"ID":"fc26b8a1-b928-42ae-b7fd-385e4cce725c","Type":"ContainerDied","Data":"14b704ce5d3d0358f39246949cfa3e8fae8902aa712e1431939928a447e73302"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.015442 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f8d-account-create-update-4bzd2" event={"ID":"bcc194a9-e4ca-43fa-a46c-da9090073151","Type":"ContainerDied","Data":"c259d3fe71225293ece8a82db9ca2bcc32949227ada2fe05050746ac015e0210"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.015480 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c259d3fe71225293ece8a82db9ca2bcc32949227ada2fe05050746ac015e0210" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.015508 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f8d-account-create-update-4bzd2" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.017689 4937 generic.go:334] "Generic (PLEG): container finished" podID="5a9dea0e-3304-42ee-92b2-c7d39db20bba" containerID="0adc1cb4667efb72fd811085288b51db0cbf5374bd6eb426f80b143b0e3072ae" exitCode=0 Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.017749 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nnvvn" event={"ID":"5a9dea0e-3304-42ee-92b2-c7d39db20bba","Type":"ContainerDied","Data":"0adc1cb4667efb72fd811085288b51db0cbf5374bd6eb426f80b143b0e3072ae"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.020454 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b31-account-create-update-4tdq9" event={"ID":"27668306-a075-4ea7-a06c-05b81239ec08","Type":"ContainerDied","Data":"7c635e6d40943bed9a39a8faa10e32a62c32848ecb2a1b307752a224242a7d45"} Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.020506 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c635e6d40943bed9a39a8faa10e32a62c32848ecb2a1b307752a224242a7d45" Jan 23 06:53:09 crc kubenswrapper[4937]: I0123 06:53:09.020527 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3b31-account-create-update-4tdq9" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.468263 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nnvvn" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.476269 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-54b1-account-create-update-8xcgx" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.533361 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc26b8a1-b928-42ae-b7fd-385e4cce725c-operator-scripts\") pod \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534111 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlcwn\" (UniqueName: \"kubernetes.io/projected/fc26b8a1-b928-42ae-b7fd-385e4cce725c-kube-api-access-xlcwn\") pod \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\" (UID: \"fc26b8a1-b928-42ae-b7fd-385e4cce725c\") " Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534153 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc26b8a1-b928-42ae-b7fd-385e4cce725c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc26b8a1-b928-42ae-b7fd-385e4cce725c" (UID: "fc26b8a1-b928-42ae-b7fd-385e4cce725c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534160 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9dea0e-3304-42ee-92b2-c7d39db20bba-operator-scripts\") pod \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534213 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98wmh\" (UniqueName: \"kubernetes.io/projected/5a9dea0e-3304-42ee-92b2-c7d39db20bba-kube-api-access-98wmh\") pod \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\" (UID: \"5a9dea0e-3304-42ee-92b2-c7d39db20bba\") " Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534478 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a9dea0e-3304-42ee-92b2-c7d39db20bba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a9dea0e-3304-42ee-92b2-c7d39db20bba" (UID: "5a9dea0e-3304-42ee-92b2-c7d39db20bba"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534957 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc26b8a1-b928-42ae-b7fd-385e4cce725c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.534985 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a9dea0e-3304-42ee-92b2-c7d39db20bba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.539665 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc26b8a1-b928-42ae-b7fd-385e4cce725c-kube-api-access-xlcwn" (OuterVolumeSpecName: "kube-api-access-xlcwn") pod "fc26b8a1-b928-42ae-b7fd-385e4cce725c" (UID: "fc26b8a1-b928-42ae-b7fd-385e4cce725c"). InnerVolumeSpecName "kube-api-access-xlcwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.551766 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9dea0e-3304-42ee-92b2-c7d39db20bba-kube-api-access-98wmh" (OuterVolumeSpecName: "kube-api-access-98wmh") pod "5a9dea0e-3304-42ee-92b2-c7d39db20bba" (UID: "5a9dea0e-3304-42ee-92b2-c7d39db20bba"). InnerVolumeSpecName "kube-api-access-98wmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.637066 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlcwn\" (UniqueName: \"kubernetes.io/projected/fc26b8a1-b928-42ae-b7fd-385e4cce725c-kube-api-access-xlcwn\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:10 crc kubenswrapper[4937]: I0123 06:53:10.637105 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98wmh\" (UniqueName: \"kubernetes.io/projected/5a9dea0e-3304-42ee-92b2-c7d39db20bba-kube-api-access-98wmh\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.040770 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-54b1-account-create-update-8xcgx" event={"ID":"fc26b8a1-b928-42ae-b7fd-385e4cce725c","Type":"ContainerDied","Data":"f3f0999f64a4926d4392e20a2e3f1b7df9048665449c5af9c9442262ecd755c3"} Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.040832 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f0999f64a4926d4392e20a2e3f1b7df9048665449c5af9c9442262ecd755c3" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.040801 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-54b1-account-create-update-8xcgx" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.042807 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nnvvn" event={"ID":"5a9dea0e-3304-42ee-92b2-c7d39db20bba","Type":"ContainerDied","Data":"b78960dcd016e0908c599b285204d653eef222f59f0b9303721ca61faa7e6408"} Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.042955 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78960dcd016e0908c599b285204d653eef222f59f0b9303721ca61faa7e6408" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.042870 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nnvvn" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.581683 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f9r8z"] Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582354 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9dea0e-3304-42ee-92b2-c7d39db20bba" containerName="mariadb-database-create" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582371 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9dea0e-3304-42ee-92b2-c7d39db20bba" containerName="mariadb-database-create" Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582388 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae36acc6-0eaa-434a-bc7c-067729b8b888" containerName="mariadb-database-create" Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582400 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae36acc6-0eaa-434a-bc7c-067729b8b888" containerName="mariadb-database-create" Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582418 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27668306-a075-4ea7-a06c-05b81239ec08" 
containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582427 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="27668306-a075-4ea7-a06c-05b81239ec08" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582438 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc194a9-e4ca-43fa-a46c-da9090073151" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582446 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc194a9-e4ca-43fa-a46c-da9090073151" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582458 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" containerName="dnsmasq-dns"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582465 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" containerName="dnsmasq-dns"
Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582481 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" containerName="init"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582487 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" containerName="init"
Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582499 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45e07a8-7bf3-4d38-9ef2-169a76dcf129" containerName="mariadb-database-create"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582506 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45e07a8-7bf3-4d38-9ef2-169a76dcf129" containerName="mariadb-database-create"
Jan 23 06:53:11 crc kubenswrapper[4937]: E0123 06:53:11.582519 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc26b8a1-b928-42ae-b7fd-385e4cce725c" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582526 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc26b8a1-b928-42ae-b7fd-385e4cce725c" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582736 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae36acc6-0eaa-434a-bc7c-067729b8b888" containerName="mariadb-database-create"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582753 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="27668306-a075-4ea7-a06c-05b81239ec08" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582766 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc194a9-e4ca-43fa-a46c-da9090073151" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582778 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45e07a8-7bf3-4d38-9ef2-169a76dcf129" containerName="mariadb-database-create"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582789 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="3691b989-e659-420b-adad-bbf7ac2406cc" containerName="dnsmasq-dns"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582802 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9dea0e-3304-42ee-92b2-c7d39db20bba" containerName="mariadb-database-create"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.582816 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc26b8a1-b928-42ae-b7fd-385e4cce725c" containerName="mariadb-account-create-update"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.583502 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.590299 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.590955 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f9r8z"]
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.653910 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96mcr\" (UniqueName: \"kubernetes.io/projected/b72d8f3e-473a-4034-9b06-bed97271b052-kube-api-access-96mcr\") pod \"root-account-create-update-f9r8z\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") " pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.654023 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72d8f3e-473a-4034-9b06-bed97271b052-operator-scripts\") pod \"root-account-create-update-f9r8z\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") " pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.756532 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96mcr\" (UniqueName: \"kubernetes.io/projected/b72d8f3e-473a-4034-9b06-bed97271b052-kube-api-access-96mcr\") pod \"root-account-create-update-f9r8z\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") " pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.756653 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72d8f3e-473a-4034-9b06-bed97271b052-operator-scripts\") pod \"root-account-create-update-f9r8z\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") " pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.757484 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72d8f3e-473a-4034-9b06-bed97271b052-operator-scripts\") pod \"root-account-create-update-f9r8z\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") " pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.778529 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96mcr\" (UniqueName: \"kubernetes.io/projected/b72d8f3e-473a-4034-9b06-bed97271b052-kube-api-access-96mcr\") pod \"root-account-create-update-f9r8z\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") " pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:11 crc kubenswrapper[4937]: I0123 06:53:11.901206 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:12 crc kubenswrapper[4937]: I0123 06:53:12.411134 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f9r8z"]
Jan 23 06:53:13 crc kubenswrapper[4937]: I0123 06:53:13.067644 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9r8z" event={"ID":"b72d8f3e-473a-4034-9b06-bed97271b052","Type":"ContainerStarted","Data":"29d9f5eeaec998f8919b0e221d63c9720bb89e7898e744211d0cf9c6d3a2204b"}
Jan 23 06:53:13 crc kubenswrapper[4937]: I0123 06:53:13.795031 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0"
Jan 23 06:53:13 crc kubenswrapper[4937]: E0123 06:53:13.795370 4937 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 23 06:53:13 crc kubenswrapper[4937]: E0123 06:53:13.795424 4937 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 23 06:53:13 crc kubenswrapper[4937]: E0123 06:53:13.795517 4937 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift podName:e96a6620-5e97-4f3b-95b3-52c8b3161098 nodeName:}" failed. No retries permitted until 2026-01-23 06:53:29.795487441 +0000 UTC m=+1209.599254124 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift") pod "swift-storage-0" (UID: "e96a6620-5e97-4f3b-95b3-52c8b3161098") : configmap "swift-ring-files" not found
Jan 23 06:53:15 crc kubenswrapper[4937]: I0123 06:53:15.087761 4937 generic.go:334] "Generic (PLEG): container finished" podID="b72d8f3e-473a-4034-9b06-bed97271b052" containerID="240832d2ca864f40d0598d00e9c0362e7b1f0fca7c34deae67606e10da9e0b75" exitCode=0
Jan 23 06:53:15 crc kubenswrapper[4937]: I0123 06:53:15.087829 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9r8z" event={"ID":"b72d8f3e-473a-4034-9b06-bed97271b052","Type":"ContainerDied","Data":"240832d2ca864f40d0598d00e9c0362e7b1f0fca7c34deae67606e10da9e0b75"}
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.456802 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.543789 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72d8f3e-473a-4034-9b06-bed97271b052-operator-scripts\") pod \"b72d8f3e-473a-4034-9b06-bed97271b052\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") "
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.543881 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96mcr\" (UniqueName: \"kubernetes.io/projected/b72d8f3e-473a-4034-9b06-bed97271b052-kube-api-access-96mcr\") pod \"b72d8f3e-473a-4034-9b06-bed97271b052\" (UID: \"b72d8f3e-473a-4034-9b06-bed97271b052\") "
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.544743 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b72d8f3e-473a-4034-9b06-bed97271b052-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b72d8f3e-473a-4034-9b06-bed97271b052" (UID: "b72d8f3e-473a-4034-9b06-bed97271b052"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.550113 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72d8f3e-473a-4034-9b06-bed97271b052-kube-api-access-96mcr" (OuterVolumeSpecName: "kube-api-access-96mcr") pod "b72d8f3e-473a-4034-9b06-bed97271b052" (UID: "b72d8f3e-473a-4034-9b06-bed97271b052"). InnerVolumeSpecName "kube-api-access-96mcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.648466 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72d8f3e-473a-4034-9b06-bed97271b052-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:16 crc kubenswrapper[4937]: I0123 06:53:16.648498 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96mcr\" (UniqueName: \"kubernetes.io/projected/b72d8f3e-473a-4034-9b06-bed97271b052-kube-api-access-96mcr\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:17 crc kubenswrapper[4937]: I0123 06:53:17.110840 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f9r8z" event={"ID":"b72d8f3e-473a-4034-9b06-bed97271b052","Type":"ContainerDied","Data":"29d9f5eeaec998f8919b0e221d63c9720bb89e7898e744211d0cf9c6d3a2204b"}
Jan 23 06:53:17 crc kubenswrapper[4937]: I0123 06:53:17.111141 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d9f5eeaec998f8919b0e221d63c9720bb89e7898e744211d0cf9c6d3a2204b"
Jan 23 06:53:17 crc kubenswrapper[4937]: I0123 06:53:17.110895 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f9r8z"
Jan 23 06:53:17 crc kubenswrapper[4937]: I0123 06:53:17.113048 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerStarted","Data":"b8ee6f0a82924013e7e3b630a3a7081336b8d6b9a878207828ecc1f61086f3b3"}
Jan 23 06:53:17 crc kubenswrapper[4937]: I0123 06:53:17.141109 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.12831248 podStartE2EDuration="2m31.141088861s" podCreationTimestamp="2026-01-23 06:50:46 +0000 UTC" firstStartedPulling="2026-01-23 06:50:48.60376477 +0000 UTC m=+1048.407531423" lastFinishedPulling="2026-01-23 06:53:16.616541151 +0000 UTC m=+1196.420307804" observedRunningTime="2026-01-23 06:53:17.134621441 +0000 UTC m=+1196.938388094" watchObservedRunningTime="2026-01-23 06:53:17.141088861 +0000 UTC m=+1196.944855514"
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.009710 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f9r8z"]
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.019314 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f9r8z"]
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.123983 4937 generic.go:334] "Generic (PLEG): container finished" podID="b9972df0-0d7d-4346-a77c-546a458a1677" containerID="f2f8d4ea5d2698121e047b30aaf9ae075803a4c0fa9032f85d619304329c783d" exitCode=0
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.124049 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dq8lz" event={"ID":"b9972df0-0d7d-4346-a77c-546a458a1677","Type":"ContainerDied","Data":"f2f8d4ea5d2698121e047b30aaf9ae075803a4c0fa9032f85d619304329c783d"}
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.192773 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.192855 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.195703 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Jan 23 06:53:18 crc kubenswrapper[4937]: I0123 06:53:18.551638 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72d8f3e-473a-4034-9b06-bed97271b052" path="/var/lib/kubelet/pods/b72d8f3e-473a-4034-9b06-bed97271b052/volumes"
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.136140 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.532375 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dq8lz"
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612257 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-scripts\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612330 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-dispersionconf\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612382 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-ring-data-devices\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612439 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9972df0-0d7d-4346-a77c-546a458a1677-etc-swift\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612504 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-combined-ca-bundle\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612531 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-swiftconf\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.612631 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlmmw\" (UniqueName: \"kubernetes.io/projected/b9972df0-0d7d-4346-a77c-546a458a1677-kube-api-access-hlmmw\") pod \"b9972df0-0d7d-4346-a77c-546a458a1677\" (UID: \"b9972df0-0d7d-4346-a77c-546a458a1677\") "
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.614207 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.614267 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9972df0-0d7d-4346-a77c-546a458a1677-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.619223 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9972df0-0d7d-4346-a77c-546a458a1677-kube-api-access-hlmmw" (OuterVolumeSpecName: "kube-api-access-hlmmw") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "kube-api-access-hlmmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.637987 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.638221 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.638424 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.646194 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-scripts" (OuterVolumeSpecName: "scripts") pod "b9972df0-0d7d-4346-a77c-546a458a1677" (UID: "b9972df0-0d7d-4346-a77c-546a458a1677"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715020 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlmmw\" (UniqueName: \"kubernetes.io/projected/b9972df0-0d7d-4346-a77c-546a458a1677-kube-api-access-hlmmw\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715288 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715349 4937 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715414 4937 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b9972df0-0d7d-4346-a77c-546a458a1677-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715469 4937 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b9972df0-0d7d-4346-a77c-546a458a1677-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715528 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:19 crc kubenswrapper[4937]: I0123 06:53:19.715587 4937 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b9972df0-0d7d-4346-a77c-546a458a1677-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:20 crc kubenswrapper[4937]: I0123 06:53:20.145833 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dq8lz" event={"ID":"b9972df0-0d7d-4346-a77c-546a458a1677","Type":"ContainerDied","Data":"c4f81aca7fb681185adcdc7874b8779136fb9a16182f13cb8e7abaf3115d43d0"}
Jan 23 06:53:20 crc kubenswrapper[4937]: I0123 06:53:20.146156 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f81aca7fb681185adcdc7874b8779136fb9a16182f13cb8e7abaf3115d43d0"
Jan 23 06:53:20 crc kubenswrapper[4937]: I0123 06:53:20.145970 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dq8lz"
Jan 23 06:53:20 crc kubenswrapper[4937]: I0123 06:53:20.901327 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.609713 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-spcxp"]
Jan 23 06:53:21 crc kubenswrapper[4937]: E0123 06:53:21.610306 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9972df0-0d7d-4346-a77c-546a458a1677" containerName="swift-ring-rebalance"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.610319 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9972df0-0d7d-4346-a77c-546a458a1677" containerName="swift-ring-rebalance"
Jan 23 06:53:21 crc kubenswrapper[4937]: E0123 06:53:21.610346 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72d8f3e-473a-4034-9b06-bed97271b052" containerName="mariadb-account-create-update"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.610352 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72d8f3e-473a-4034-9b06-bed97271b052" containerName="mariadb-account-create-update"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.610500 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72d8f3e-473a-4034-9b06-bed97271b052" containerName="mariadb-account-create-update"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.610515 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9972df0-0d7d-4346-a77c-546a458a1677" containerName="swift-ring-rebalance"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.611067 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.613979 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.615337 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-spcxp"]
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.641891 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-operator-scripts\") pod \"root-account-create-update-spcxp\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.641941 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcdcj\" (UniqueName: \"kubernetes.io/projected/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-kube-api-access-jcdcj\") pod \"root-account-create-update-spcxp\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.743334 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-operator-scripts\") pod \"root-account-create-update-spcxp\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.743408 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcdcj\" (UniqueName: \"kubernetes.io/projected/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-kube-api-access-jcdcj\") pod \"root-account-create-update-spcxp\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.744160 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-operator-scripts\") pod \"root-account-create-update-spcxp\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.765866 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcdcj\" (UniqueName: \"kubernetes.io/projected/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-kube-api-access-jcdcj\") pod \"root-account-create-update-spcxp\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:21 crc kubenswrapper[4937]: I0123 06:53:21.934470 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-spcxp"
Jan 23 06:53:22 crc kubenswrapper[4937]: I0123 06:53:22.166372 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="prometheus" containerID="cri-o://4c708104c166802c9cecdab7bd1b2cc9a2b4aef3d884ca4fadbb72584caf0935" gracePeriod=600
Jan 23 06:53:22 crc kubenswrapper[4937]: I0123 06:53:22.166677 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="thanos-sidecar" containerID="cri-o://b8ee6f0a82924013e7e3b630a3a7081336b8d6b9a878207828ecc1f61086f3b3" gracePeriod=600
Jan 23 06:53:22 crc kubenswrapper[4937]: I0123 06:53:22.166700 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="config-reloader" containerID="cri-o://c5ec3c5460b6b5574ecf121c7ceb7dcfbb0146f2ad6bedfaf4bd795d46f99862" gracePeriod=600
Jan 23 06:53:22 crc kubenswrapper[4937]: I0123 06:53:22.745655 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-spcxp"]
Jan 23 06:53:22 crc kubenswrapper[4937]: W0123 06:53:22.749686 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod007d92c9_9ad1_4f61_8bd3_4cd5311b2db4.slice/crio-37c9a6e955e26c46ef4459b207d1ab2a27174afebff138be1ca517120cd4a9fe WatchSource:0}: Error finding container 37c9a6e955e26c46ef4459b207d1ab2a27174afebff138be1ca517120cd4a9fe: Status 404 returned error can't find the container with id 37c9a6e955e26c46ef4459b207d1ab2a27174afebff138be1ca517120cd4a9fe
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.172287 4937 generic.go:334] "Generic (PLEG): container finished" podID="32a4f804-e737-4bf8-b092-b20127604273" containerID="b8ee6f0a82924013e7e3b630a3a7081336b8d6b9a878207828ecc1f61086f3b3" exitCode=0
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.172318 4937 generic.go:334] "Generic (PLEG): container finished" podID="32a4f804-e737-4bf8-b092-b20127604273" containerID="c5ec3c5460b6b5574ecf121c7ceb7dcfbb0146f2ad6bedfaf4bd795d46f99862" exitCode=0
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.172327 4937 generic.go:334] "Generic (PLEG): container finished" podID="32a4f804-e737-4bf8-b092-b20127604273" containerID="4c708104c166802c9cecdab7bd1b2cc9a2b4aef3d884ca4fadbb72584caf0935" exitCode=0
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.172383 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerDied","Data":"b8ee6f0a82924013e7e3b630a3a7081336b8d6b9a878207828ecc1f61086f3b3"}
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.172419 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerDied","Data":"c5ec3c5460b6b5574ecf121c7ceb7dcfbb0146f2ad6bedfaf4bd795d46f99862"}
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.172432 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerDied","Data":"4c708104c166802c9cecdab7bd1b2cc9a2b4aef3d884ca4fadbb72584caf0935"}
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.173877 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-spcxp" event={"ID":"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4","Type":"ContainerStarted","Data":"37c9a6e955e26c46ef4459b207d1ab2a27174afebff138be1ca517120cd4a9fe"}
Jan 23 06:53:23 crc kubenswrapper[4937]: I0123 06:53:23.194058 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": dial tcp 10.217.0.113:9090: connect: connection refused"
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.001562 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.181460 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32a4f804-e737-4bf8-b092-b20127604273-config-out\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.181535 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-0\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.181567 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-web-config\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.181651 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-thanos-prometheus-http-client-file\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182310 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-config\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182374 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182441 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182481 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-tls-assets\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182541 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-1\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182607 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vrhg\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-kube-api-access-9vrhg\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.182632 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-2\") pod \"32a4f804-e737-4bf8-b092-b20127604273\" (UID: \"32a4f804-e737-4bf8-b092-b20127604273\") "
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.184229 4937 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.187481 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.188046 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.189979 4937 generic.go:334] "Generic (PLEG): container finished" podID="4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e" containerID="93a6e67502750b306bf7e3beb72bab1c0ae18bd87867fc23d07a2735becce099" exitCode=0
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.190044 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e","Type":"ContainerDied","Data":"93a6e67502750b306bf7e3beb72bab1c0ae18bd87867fc23d07a2735becce099"}
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.195021 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-config" (OuterVolumeSpecName: "config") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.195257 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.195048 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "thanos-prometheus-http-client-file".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.195159 4937 generic.go:334] "Generic (PLEG): container finished" podID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerID="1061c71e90e3c339192a8927fc1c04e92e12d63bd856ba64167558bd0307450c" exitCode=0 Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.195340 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-kube-api-access-9vrhg" (OuterVolumeSpecName: "kube-api-access-9vrhg") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "kube-api-access-9vrhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.195189 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c22daa68-7c34-4180-adcc-d939bfa5a607","Type":"ContainerDied","Data":"1061c71e90e3c339192a8927fc1c04e92e12d63bd856ba64167558bd0307450c"} Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.198216 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75b79f91-7f35-4e37-9fd8-2ada0ad723df","Type":"ContainerStarted","Data":"b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d"} Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.199345 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.205929 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32a4f804-e737-4bf8-b092-b20127604273-config-out" (OuterVolumeSpecName: "config-out") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.206289 4937 generic.go:334] "Generic (PLEG): container finished" podID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerID="ed4d6735eacad8b5669c077204ba76ee680315c08659282ac0774ec434cbe540" exitCode=0 Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.206456 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6","Type":"ContainerDied","Data":"ed4d6735eacad8b5669c077204ba76ee680315c08659282ac0774ec434cbe540"} Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.209422 4937 generic.go:334] "Generic (PLEG): container finished" podID="007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" containerID="3071216e4a6d08febd919524511b485190bef2bff32b13cc5bd21a36385339ad" exitCode=0 Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.209713 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-spcxp" event={"ID":"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4","Type":"ContainerDied","Data":"3071216e4a6d08febd919524511b485190bef2bff32b13cc5bd21a36385339ad"} Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.216440 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"32a4f804-e737-4bf8-b092-b20127604273","Type":"ContainerDied","Data":"787975cede018e1aec12442889b9218c8dd106e75aeb16de18eecd2421b651c0"} Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.216498 4937 scope.go:117] "RemoveContainer" containerID="b8ee6f0a82924013e7e3b630a3a7081336b8d6b9a878207828ecc1f61086f3b3" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.216677 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.222844 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-web-config" (OuterVolumeSpecName: "web-config") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.241918 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "32a4f804-e737-4bf8-b092-b20127604273" (UID: "32a4f804-e737-4bf8-b092-b20127604273"). InnerVolumeSpecName "pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.266813 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.276644137 podStartE2EDuration="2m38.266796421s" podCreationTimestamp="2026-01-23 06:50:46 +0000 UTC" firstStartedPulling="2026-01-23 06:50:47.361504587 +0000 UTC m=+1047.165271240" lastFinishedPulling="2026-01-23 06:53:22.351656861 +0000 UTC m=+1202.155423524" observedRunningTime="2026-01-23 06:53:24.25534994 +0000 UTC m=+1204.059116603" watchObservedRunningTime="2026-01-23 06:53:24.266796421 +0000 UTC m=+1204.070563074" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285244 4937 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/32a4f804-e737-4bf8-b092-b20127604273-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285272 4937 reconciler_common.go:293] "Volume detached for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285282 4937 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285316 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/32a4f804-e737-4bf8-b092-b20127604273-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285341 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") on node \"crc\" " Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285351 4937 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285360 4937 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285390 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vrhg\" (UniqueName: \"kubernetes.io/projected/32a4f804-e737-4bf8-b092-b20127604273-kube-api-access-9vrhg\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.285400 4937 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/32a4f804-e737-4bf8-b092-b20127604273-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.339807 4937 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.340895 4937 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6") on node "crc" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.349465 4937 scope.go:117] "RemoveContainer" containerID="c5ec3c5460b6b5574ecf121c7ceb7dcfbb0146f2ad6bedfaf4bd795d46f99862" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.387152 4937 reconciler_common.go:293] "Volume detached for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.392692 4937 scope.go:117] "RemoveContainer" containerID="4c708104c166802c9cecdab7bd1b2cc9a2b4aef3d884ca4fadbb72584caf0935" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.427228 4937 scope.go:117] "RemoveContainer" containerID="b04ebbb333d4f10ff1ada90bae837ff31fbc2c42a92578f91421af008c398fda" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.550123 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.558487 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.581659 4937 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:53:24 crc kubenswrapper[4937]: E0123 06:53:24.582088 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="prometheus" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582110 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="prometheus" Jan 23 06:53:24 crc kubenswrapper[4937]: E0123 06:53:24.582141 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="init-config-reloader" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582151 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="init-config-reloader" Jan 23 06:53:24 crc kubenswrapper[4937]: E0123 06:53:24.582161 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="thanos-sidecar" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582169 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="thanos-sidecar" Jan 23 06:53:24 crc kubenswrapper[4937]: E0123 06:53:24.582184 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="config-reloader" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582192 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="config-reloader" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582380 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="prometheus" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582406 4937 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="config-reloader" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.582422 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a4f804-e737-4bf8-b092-b20127604273" containerName="thanos-sidecar" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.585461 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.590237 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.590371 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.590538 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.590432 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.590824 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.591099 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.591165 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.593800 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zrrzk" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 
06:53:24.608295 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.613929 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.691930 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.691993 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692046 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692066 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692092 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692127 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692153 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692210 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692329 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692353 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692381 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692410 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.692433 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4sx\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-kube-api-access-7n4sx\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794445 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794504 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794529 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794562 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794607 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4sx\" (UniqueName: 
\"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-kube-api-access-7n4sx\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794648 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794683 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794743 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794760 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794787 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794814 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794832 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.794887 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.795611 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 
06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.796049 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.797224 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.800169 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.800318 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.800560 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" 
Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.800975 4937 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.801005 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad88d819a9190e54c498f5e1a4ce0a9fbf70213240e060c76a145dc64e923a6c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.801494 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.802299 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.803647 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.807410 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.810470 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.816764 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4sx\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-kube-api-access-7n4sx\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.830083 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:24 crc kubenswrapper[4937]: I0123 06:53:24.908651 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.243083 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c22daa68-7c34-4180-adcc-d939bfa5a607","Type":"ContainerStarted","Data":"1150dab81040022d4c2b88d4e39c49a7006fee9c4e027739d4bb4730700b48b0"} Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.243703 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.246781 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6","Type":"ContainerStarted","Data":"622c0fd4f9c73a1bee1b8531d0f45bda90fe44028f07984f086365d5b29e13b1"} Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.247046 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.292484 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e","Type":"ContainerStarted","Data":"46e15b121c2e7fff015fe8ed24122bfdb4d070ff58b46eea7050722a365346ac"} Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.293009 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.338506 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=40.076675744 podStartE2EDuration="2m45.338489755s" podCreationTimestamp="2026-01-23 06:50:40 +0000 UTC" firstStartedPulling="2026-01-23 06:50:43.174012422 +0000 UTC m=+1042.977779075" lastFinishedPulling="2026-01-23 06:52:48.435826433 +0000 UTC m=+1168.239593086" 
observedRunningTime="2026-01-23 06:53:25.335313352 +0000 UTC m=+1205.139080015" watchObservedRunningTime="2026-01-23 06:53:25.338489755 +0000 UTC m=+1205.142256408" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.339090 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.986915151 podStartE2EDuration="2m46.339084581s" podCreationTimestamp="2026-01-23 06:50:39 +0000 UTC" firstStartedPulling="2026-01-23 06:50:43.075747245 +0000 UTC m=+1042.879513898" lastFinishedPulling="2026-01-23 06:52:48.427916675 +0000 UTC m=+1168.231683328" observedRunningTime="2026-01-23 06:53:25.308127835 +0000 UTC m=+1205.111894508" watchObservedRunningTime="2026-01-23 06:53:25.339084581 +0000 UTC m=+1205.142851234" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.360506 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.961318646 podStartE2EDuration="2m46.360490915s" podCreationTimestamp="2026-01-23 06:50:39 +0000 UTC" firstStartedPulling="2026-01-23 06:50:43.071326616 +0000 UTC m=+1042.875093259" lastFinishedPulling="2026-01-23 06:52:45.470498855 +0000 UTC m=+1165.274265528" observedRunningTime="2026-01-23 06:53:25.355841173 +0000 UTC m=+1205.159607816" watchObservedRunningTime="2026-01-23 06:53:25.360490915 +0000 UTC m=+1205.164257568" Jan 23 06:53:25 crc kubenswrapper[4937]: I0123 06:53:25.399058 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.620168 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-spcxp" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.718018 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcdcj\" (UniqueName: \"kubernetes.io/projected/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-kube-api-access-jcdcj\") pod \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.718180 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-operator-scripts\") pod \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\" (UID: \"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4\") " Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.719009 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" (UID: "007d92c9-9ad1-4f61-8bd3-4cd5311b2db4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.722162 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-kube-api-access-jcdcj" (OuterVolumeSpecName: "kube-api-access-jcdcj") pod "007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" (UID: "007d92c9-9ad1-4f61-8bd3-4cd5311b2db4"). InnerVolumeSpecName "kube-api-access-jcdcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.819820 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:25.819847 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcdcj\" (UniqueName: \"kubernetes.io/projected/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4-kube-api-access-jcdcj\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:26.300018 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerStarted","Data":"e0c09c71b2a0014a01ac24f03eccf5b9894bf0c6e617fa2f5de10dc582029b54"} Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:26.301916 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-spcxp" event={"ID":"007d92c9-9ad1-4f61-8bd3-4cd5311b2db4","Type":"ContainerDied","Data":"37c9a6e955e26c46ef4459b207d1ab2a27174afebff138be1ca517120cd4a9fe"} Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:26.301954 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-spcxp" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:26.301966 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c9a6e955e26c46ef4459b207d1ab2a27174afebff138be1ca517120cd4a9fe" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:26.538178 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a4f804-e737-4bf8-b092-b20127604273" path="/var/lib/kubelet/pods/32a4f804-e737-4bf8-b092-b20127604273/volumes" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:28.035077 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-spcxp"] Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:28.041957 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-spcxp"] Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:28.535948 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" path="/var/lib/kubelet/pods/007d92c9-9ad1-4f61-8bd3-4cd5311b2db4/volumes" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:29.890449 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:29.898821 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e96a6620-5e97-4f3b-95b3-52c8b3161098-etc-swift\") pod \"swift-storage-0\" (UID: \"e96a6620-5e97-4f3b-95b3-52c8b3161098\") " pod="openstack/swift-storage-0" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:30.139873 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.658963 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-smx28"] Jan 23 06:53:33 crc kubenswrapper[4937]: E0123 06:53:31.660043 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" containerName="mariadb-account-create-update" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.660070 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" containerName="mariadb-account-create-update" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.660237 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="007d92c9-9ad1-4f61-8bd3-4cd5311b2db4" containerName="mariadb-account-create-update" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.660861 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.663618 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.671950 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-smx28"] Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.718196 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f05e1-212e-4256-8ecd-0f900dca8035-operator-scripts\") pod \"root-account-create-update-smx28\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.718265 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4znms\" (UniqueName: \"kubernetes.io/projected/343f05e1-212e-4256-8ecd-0f900dca8035-kube-api-access-4znms\") pod \"root-account-create-update-smx28\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.820117 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f05e1-212e-4256-8ecd-0f900dca8035-operator-scripts\") pod \"root-account-create-update-smx28\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.820177 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4znms\" (UniqueName: \"kubernetes.io/projected/343f05e1-212e-4256-8ecd-0f900dca8035-kube-api-access-4znms\") pod \"root-account-create-update-smx28\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.821744 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f05e1-212e-4256-8ecd-0f900dca8035-operator-scripts\") pod \"root-account-create-update-smx28\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.839126 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4znms\" (UniqueName: \"kubernetes.io/projected/343f05e1-212e-4256-8ecd-0f900dca8035-kube-api-access-4znms\") pod \"root-account-create-update-smx28\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:31.984546 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-smx28" Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:33.363496 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerStarted","Data":"96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2"} Jan 23 06:53:33 crc kubenswrapper[4937]: I0123 06:53:33.865804 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-smx28"] Jan 23 06:53:34 crc kubenswrapper[4937]: I0123 06:53:34.024014 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 06:53:34 crc kubenswrapper[4937]: W0123 06:53:34.029039 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode96a6620_5e97_4f3b_95b3_52c8b3161098.slice/crio-e0ec7ba857651a3a545b28d361a1b51b02a0a69f3adef17697c4873cd5eec19d WatchSource:0}: Error finding container e0ec7ba857651a3a545b28d361a1b51b02a0a69f3adef17697c4873cd5eec19d: Status 404 returned error can't find the container with id e0ec7ba857651a3a545b28d361a1b51b02a0a69f3adef17697c4873cd5eec19d Jan 23 06:53:34 crc kubenswrapper[4937]: I0123 06:53:34.371553 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"e0ec7ba857651a3a545b28d361a1b51b02a0a69f3adef17697c4873cd5eec19d"} Jan 23 06:53:34 crc kubenswrapper[4937]: I0123 06:53:34.373379 4937 generic.go:334] "Generic (PLEG): container finished" podID="343f05e1-212e-4256-8ecd-0f900dca8035" containerID="a14c265dc9b31f5d1cbc3eefc7cee11f8424393cad9bfe6e4ec011e27248bc82" exitCode=0 Jan 23 06:53:34 crc kubenswrapper[4937]: I0123 06:53:34.374677 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-smx28" 
event={"ID":"343f05e1-212e-4256-8ecd-0f900dca8035","Type":"ContainerDied","Data":"a14c265dc9b31f5d1cbc3eefc7cee11f8424393cad9bfe6e4ec011e27248bc82"} Jan 23 06:53:34 crc kubenswrapper[4937]: I0123 06:53:34.374705 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-smx28" event={"ID":"343f05e1-212e-4256-8ecd-0f900dca8035","Type":"ContainerStarted","Data":"7bd03ffe8c66a1df560e5be5e29993697c449233b8e523ca5c76ace9998f9aaf"} Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.381823 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"691fd6bd1e737e09840ae0e83f66b7fec15f07192bb2627526f0925692efd7a4"} Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.382119 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"5fe786d122c0deb16db0c524324fca6f3d7b7d2b901085242ffb77920a2ae4bf"} Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.382132 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"8df65a2a97880a204630d6a379fd99f1ea903718eadcca0cf675dbcedecaab21"} Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.641932 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-smx28" Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.694771 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f05e1-212e-4256-8ecd-0f900dca8035-operator-scripts\") pod \"343f05e1-212e-4256-8ecd-0f900dca8035\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.694824 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4znms\" (UniqueName: \"kubernetes.io/projected/343f05e1-212e-4256-8ecd-0f900dca8035-kube-api-access-4znms\") pod \"343f05e1-212e-4256-8ecd-0f900dca8035\" (UID: \"343f05e1-212e-4256-8ecd-0f900dca8035\") " Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.695530 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343f05e1-212e-4256-8ecd-0f900dca8035-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "343f05e1-212e-4256-8ecd-0f900dca8035" (UID: "343f05e1-212e-4256-8ecd-0f900dca8035"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.700476 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343f05e1-212e-4256-8ecd-0f900dca8035-kube-api-access-4znms" (OuterVolumeSpecName: "kube-api-access-4znms") pod "343f05e1-212e-4256-8ecd-0f900dca8035" (UID: "343f05e1-212e-4256-8ecd-0f900dca8035"). InnerVolumeSpecName "kube-api-access-4znms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.797300 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4znms\" (UniqueName: \"kubernetes.io/projected/343f05e1-212e-4256-8ecd-0f900dca8035-kube-api-access-4znms\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:35 crc kubenswrapper[4937]: I0123 06:53:35.797350 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f05e1-212e-4256-8ecd-0f900dca8035-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:36 crc kubenswrapper[4937]: I0123 06:53:36.394150 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"6efc843dec512ae29ee2f0e2299d21f103be0851d56169abb9ef159e04e6ac17"} Jan 23 06:53:36 crc kubenswrapper[4937]: I0123 06:53:36.408007 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-smx28" event={"ID":"343f05e1-212e-4256-8ecd-0f900dca8035","Type":"ContainerDied","Data":"7bd03ffe8c66a1df560e5be5e29993697c449233b8e523ca5c76ace9998f9aaf"} Jan 23 06:53:36 crc kubenswrapper[4937]: I0123 06:53:36.408327 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bd03ffe8c66a1df560e5be5e29993697c449233b8e523ca5c76ace9998f9aaf" Jan 23 06:53:36 crc kubenswrapper[4937]: I0123 06:53:36.408389 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-smx28" Jan 23 06:53:36 crc kubenswrapper[4937]: I0123 06:53:36.604278 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 06:53:37 crc kubenswrapper[4937]: I0123 06:53:37.438937 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"217b00a76b66a716d87e0e59fd5139cbf40b55d80b4bb2e68a086bfe788dbfed"} Jan 23 06:53:37 crc kubenswrapper[4937]: I0123 06:53:37.440583 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"1466617f19bd27e25a402f06174b125c6856a0813001755e270436d1fdcf4189"} Jan 23 06:53:37 crc kubenswrapper[4937]: I0123 06:53:37.440700 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"7e016e4dd2f757a3e348dafd0062500b702a3c3976c8bb5da6b306585b098a78"} Jan 23 06:53:37 crc kubenswrapper[4937]: I0123 06:53:37.440773 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"476ef72793a6ab099c051acf8da2ba9052b85c8fcf77ba8de5afb7995bb0b487"} Jan 23 06:53:38 crc kubenswrapper[4937]: I0123 06:53:38.068778 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-smx28"] Jan 23 06:53:38 crc kubenswrapper[4937]: I0123 06:53:38.074872 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-smx28"] Jan 23 06:53:38 crc kubenswrapper[4937]: I0123 06:53:38.451983 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"93c37d436219f5f926c1089c4b37286a2b8c1cc9f247e59ff8e98c4508ec06e2"} Jan 23 06:53:38 crc kubenswrapper[4937]: I0123 06:53:38.452024 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"4f8757ba511a76562bc9d3efce860ce49c2b8a6176da714c635062d03ef1a574"} Jan 23 06:53:38 crc kubenswrapper[4937]: I0123 06:53:38.452037 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"ef8b434c4ee413f984f0e50588dca80cc0967830b0f5e02f675d8e481a101649"} Jan 23 06:53:38 crc kubenswrapper[4937]: I0123 06:53:38.536206 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343f05e1-212e-4256-8ecd-0f900dca8035" path="/var/lib/kubelet/pods/343f05e1-212e-4256-8ecd-0f900dca8035/volumes" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.460517 4937 generic.go:334] "Generic (PLEG): container finished" podID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerID="96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2" exitCode=0 Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.460612 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerDied","Data":"96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2"} Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.467782 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"4a74ba51a522b42c598eb2258c80858085690facd47064f52b76a50dfe19fbc7"} Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.467824 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"8e4ccbbe3540c186184dcd017651d123a7414d311a258a490866beb9b0d41516"} Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.467840 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"63f71204f30e221af23754940816ee84690e551027184632f82f69a236cd5fe3"} Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.467851 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e96a6620-5e97-4f3b-95b3-52c8b3161098","Type":"ContainerStarted","Data":"321cf86e4a608916b18585b7e3bd3b690e077ef7e3af28ee6cd1a56e2c69ea08"} Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.553496 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=39.967928380000004 podStartE2EDuration="43.553477426s" podCreationTimestamp="2026-01-23 06:52:56 +0000 UTC" firstStartedPulling="2026-01-23 06:53:34.038262636 +0000 UTC m=+1213.842029289" lastFinishedPulling="2026-01-23 06:53:37.623811662 +0000 UTC m=+1217.427578335" observedRunningTime="2026-01-23 06:53:39.549219134 +0000 UTC m=+1219.352985797" watchObservedRunningTime="2026-01-23 06:53:39.553477426 +0000 UTC m=+1219.357244079" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.824746 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79db9b68b9-j4qgx"] Jan 23 06:53:39 crc kubenswrapper[4937]: E0123 06:53:39.825370 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343f05e1-212e-4256-8ecd-0f900dca8035" containerName="mariadb-account-create-update" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.825393 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="343f05e1-212e-4256-8ecd-0f900dca8035" containerName="mariadb-account-create-update" Jan 
23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.825578 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="343f05e1-212e-4256-8ecd-0f900dca8035" containerName="mariadb-account-create-update" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.826493 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.848180 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.869225 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79db9b68b9-j4qgx"] Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.895708 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5p7\" (UniqueName: \"kubernetes.io/projected/73fc9957-a295-493b-8c78-3cb99b56584c-kube-api-access-qn5p7\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.895762 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-config\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.895790 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-nb\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 
06:53:39.895833 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-sb\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.895852 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-svc\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.895904 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-swift-storage-0\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.996944 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-swift-storage-0\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.997093 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn5p7\" (UniqueName: \"kubernetes.io/projected/73fc9957-a295-493b-8c78-3cb99b56584c-kube-api-access-qn5p7\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.997520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-config\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.998075 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-swift-storage-0\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.998227 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-config\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.998285 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-nb\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.998846 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-nb\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.998899 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-sb\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.998920 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-svc\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.999444 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-sb\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:39 crc kubenswrapper[4937]: I0123 06:53:39.999750 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-svc\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:40 crc kubenswrapper[4937]: I0123 06:53:40.020766 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn5p7\" (UniqueName: \"kubernetes.io/projected/73fc9957-a295-493b-8c78-3cb99b56584c-kube-api-access-qn5p7\") pod \"dnsmasq-dns-79db9b68b9-j4qgx\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:40 crc kubenswrapper[4937]: I0123 06:53:40.144211 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:40 crc kubenswrapper[4937]: I0123 06:53:40.479834 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerStarted","Data":"7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e"}
Jan 23 06:53:40 crc kubenswrapper[4937]: I0123 06:53:40.617411 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79db9b68b9-j4qgx"]
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.487255 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" event={"ID":"73fc9957-a295-493b-8c78-3cb99b56584c","Type":"ContainerStarted","Data":"07790255d7affadbd5bc58eaa6484e37b5ebc83d4bb0f9fbd08c3fe79fbba08c"}
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.678046 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4bk5m"]
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.679319 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.681158 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.694201 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4bk5m"]
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.725750 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7820d186-f940-4136-a1cb-ee4437320523-operator-scripts\") pod \"root-account-create-update-4bk5m\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") " pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.725822 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phj8v\" (UniqueName: \"kubernetes.io/projected/7820d186-f940-4136-a1cb-ee4437320523-kube-api-access-phj8v\") pod \"root-account-create-update-4bk5m\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") " pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.827359 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7820d186-f940-4136-a1cb-ee4437320523-operator-scripts\") pod \"root-account-create-update-4bk5m\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") " pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.827452 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phj8v\" (UniqueName: \"kubernetes.io/projected/7820d186-f940-4136-a1cb-ee4437320523-kube-api-access-phj8v\") pod \"root-account-create-update-4bk5m\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") " pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.828247 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7820d186-f940-4136-a1cb-ee4437320523-operator-scripts\") pod \"root-account-create-update-4bk5m\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") " pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:41 crc kubenswrapper[4937]: I0123 06:53:41.856586 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phj8v\" (UniqueName: \"kubernetes.io/projected/7820d186-f940-4136-a1cb-ee4437320523-kube-api-access-phj8v\") pod \"root-account-create-update-4bk5m\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") " pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:42 crc kubenswrapper[4937]: I0123 06:53:42.007933 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:42 crc kubenswrapper[4937]: I0123 06:53:42.275834 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4bk5m"]
Jan 23 06:53:42 crc kubenswrapper[4937]: W0123 06:53:42.284741 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7820d186_f940_4136_a1cb_ee4437320523.slice/crio-d499726d8bbaa87d11d82c98b4eca724d517ebfbd492d2c2cb8c5b26f093ed3b WatchSource:0}: Error finding container d499726d8bbaa87d11d82c98b4eca724d517ebfbd492d2c2cb8c5b26f093ed3b: Status 404 returned error can't find the container with id d499726d8bbaa87d11d82c98b4eca724d517ebfbd492d2c2cb8c5b26f093ed3b
Jan 23 06:53:42 crc kubenswrapper[4937]: I0123 06:53:42.436583 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused"
Jan 23 06:53:42 crc kubenswrapper[4937]: I0123 06:53:42.461675 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused"
Jan 23 06:53:42 crc kubenswrapper[4937]: I0123 06:53:42.476173 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused"
Jan 23 06:53:42 crc kubenswrapper[4937]: I0123 06:53:42.495662 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4bk5m" event={"ID":"7820d186-f940-4136-a1cb-ee4437320523","Type":"ContainerStarted","Data":"d499726d8bbaa87d11d82c98b4eca724d517ebfbd492d2c2cb8c5b26f093ed3b"}
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.534855 4937 generic.go:334] "Generic (PLEG): container finished" podID="73fc9957-a295-493b-8c78-3cb99b56584c" containerID="c3141d1d4847ab115c7dccf6a22b8bd94da943d6d7f66e024bec42daddc13938" exitCode=0
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.540446 4937 generic.go:334] "Generic (PLEG): container finished" podID="7820d186-f940-4136-a1cb-ee4437320523" containerID="949f654a64ef3088f47025bae16d8082b18c29fdcbe76ff3c6666ade60b8022e" exitCode=0
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.542229 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" event={"ID":"73fc9957-a295-493b-8c78-3cb99b56584c","Type":"ContainerDied","Data":"c3141d1d4847ab115c7dccf6a22b8bd94da943d6d7f66e024bec42daddc13938"}
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.542282 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4bk5m" event={"ID":"7820d186-f940-4136-a1cb-ee4437320523","Type":"ContainerDied","Data":"949f654a64ef3088f47025bae16d8082b18c29fdcbe76ff3c6666ade60b8022e"}
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.544878 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerStarted","Data":"7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7"}
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.544924 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerStarted","Data":"a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8"}
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.624111 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.624090524 podStartE2EDuration="20.624090524s" podCreationTimestamp="2026-01-23 06:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:44.617870926 +0000 UTC m=+1224.421637579" watchObservedRunningTime="2026-01-23 06:53:44.624090524 +0000 UTC m=+1224.427857167"
Jan 23 06:53:44 crc kubenswrapper[4937]: I0123 06:53:44.909374 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 23 06:53:45 crc kubenswrapper[4937]: I0123 06:53:45.553862 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" event={"ID":"73fc9957-a295-493b-8c78-3cb99b56584c","Type":"ContainerStarted","Data":"5c3e4496a879d31751cfdbc31e1cc1167d509dc1097270f4f5cc1e35065a5437"}
Jan 23 06:53:45 crc kubenswrapper[4937]: I0123 06:53:45.594780 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" podStartSLOduration=6.59476337 podStartE2EDuration="6.59476337s" podCreationTimestamp="2026-01-23 06:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:45.587500794 +0000 UTC m=+1225.391267447" watchObservedRunningTime="2026-01-23 06:53:45.59476337 +0000 UTC m=+1225.398530023"
Jan 23 06:53:45 crc kubenswrapper[4937]: I0123 06:53:45.933344 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.102995 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phj8v\" (UniqueName: \"kubernetes.io/projected/7820d186-f940-4136-a1cb-ee4437320523-kube-api-access-phj8v\") pod \"7820d186-f940-4136-a1cb-ee4437320523\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") "
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.103173 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7820d186-f940-4136-a1cb-ee4437320523-operator-scripts\") pod \"7820d186-f940-4136-a1cb-ee4437320523\" (UID: \"7820d186-f940-4136-a1cb-ee4437320523\") "
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.103704 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7820d186-f940-4136-a1cb-ee4437320523-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7820d186-f940-4136-a1cb-ee4437320523" (UID: "7820d186-f940-4136-a1cb-ee4437320523"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.109847 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7820d186-f940-4136-a1cb-ee4437320523-kube-api-access-phj8v" (OuterVolumeSpecName: "kube-api-access-phj8v") pod "7820d186-f940-4136-a1cb-ee4437320523" (UID: "7820d186-f940-4136-a1cb-ee4437320523"). InnerVolumeSpecName "kube-api-access-phj8v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.204901 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phj8v\" (UniqueName: \"kubernetes.io/projected/7820d186-f940-4136-a1cb-ee4437320523-kube-api-access-phj8v\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.204943 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7820d186-f940-4136-a1cb-ee4437320523-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.564355 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4bk5m"
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.565101 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4bk5m" event={"ID":"7820d186-f940-4136-a1cb-ee4437320523","Type":"ContainerDied","Data":"d499726d8bbaa87d11d82c98b4eca724d517ebfbd492d2c2cb8c5b26f093ed3b"}
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.565126 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d499726d8bbaa87d11d82c98b4eca724d517ebfbd492d2c2cb8c5b26f093ed3b"
Jan 23 06:53:46 crc kubenswrapper[4937]: I0123 06:53:46.565145 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:48 crc kubenswrapper[4937]: I0123 06:53:48.083004 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4bk5m"]
Jan 23 06:53:48 crc kubenswrapper[4937]: I0123 06:53:48.091041 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4bk5m"]
Jan 23 06:53:48 crc kubenswrapper[4937]: I0123 06:53:48.541623 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7820d186-f940-4136-a1cb-ee4437320523" path="/var/lib/kubelet/pods/7820d186-f940-4136-a1cb-ee4437320523/volumes"
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.146790 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx"
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.231756 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6b6845ff-5j2pc"]
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.236018 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerName="dnsmasq-dns" containerID="cri-o://3d0521cf144ca35080afdf01c18739f5dfc2ffb4f3090cd8cf7a81d72ac93292" gracePeriod=10
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.621676 4937 generic.go:334] "Generic (PLEG): container finished" podID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerID="3d0521cf144ca35080afdf01c18739f5dfc2ffb4f3090cd8cf7a81d72ac93292" exitCode=0
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.621768 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" event={"ID":"a51817c1-3ea7-4f5e-801b-10c2d04fa99e","Type":"ContainerDied","Data":"3d0521cf144ca35080afdf01c18739f5dfc2ffb4f3090cd8cf7a81d72ac93292"}
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.787881 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc"
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.895399 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-sb\") pod \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") "
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.895482 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-nb\") pod \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") "
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.895684 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdv2\" (UniqueName: \"kubernetes.io/projected/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-kube-api-access-2rdv2\") pod \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") "
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.895771 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-dns-svc\") pod \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") "
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.895879 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-config\") pod \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\" (UID: \"a51817c1-3ea7-4f5e-801b-10c2d04fa99e\") "
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.904323 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-kube-api-access-2rdv2" (OuterVolumeSpecName: "kube-api-access-2rdv2") pod "a51817c1-3ea7-4f5e-801b-10c2d04fa99e" (UID: "a51817c1-3ea7-4f5e-801b-10c2d04fa99e"). InnerVolumeSpecName "kube-api-access-2rdv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.941150 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a51817c1-3ea7-4f5e-801b-10c2d04fa99e" (UID: "a51817c1-3ea7-4f5e-801b-10c2d04fa99e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.954206 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a51817c1-3ea7-4f5e-801b-10c2d04fa99e" (UID: "a51817c1-3ea7-4f5e-801b-10c2d04fa99e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.956902 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-config" (OuterVolumeSpecName: "config") pod "a51817c1-3ea7-4f5e-801b-10c2d04fa99e" (UID: "a51817c1-3ea7-4f5e-801b-10c2d04fa99e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.964937 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a51817c1-3ea7-4f5e-801b-10c2d04fa99e" (UID: "a51817c1-3ea7-4f5e-801b-10c2d04fa99e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.999787 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.999824 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.999834 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:50 crc kubenswrapper[4937]: I0123 06:53:50.999844 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:50.999854 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdv2\" (UniqueName: \"kubernetes.io/projected/a51817c1-3ea7-4f5e-801b-10c2d04fa99e-kube-api-access-2rdv2\") on node \"crc\" DevicePath \"\""
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.636281 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc" event={"ID":"a51817c1-3ea7-4f5e-801b-10c2d04fa99e","Type":"ContainerDied","Data":"3dc3c2c5b8af7dd62d7efbf9ab18fa9a0d6809286ce52e2a1ebb01a014e6a3da"}
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.636347 4937 scope.go:117] "RemoveContainer" containerID="3d0521cf144ca35080afdf01c18739f5dfc2ffb4f3090cd8cf7a81d72ac93292"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.636359 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6b6845ff-5j2pc"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.672809 4937 scope.go:117] "RemoveContainer" containerID="f1ba1fbfb564452023f48b86cbe289ab78ea6ff3174f54775aa6bbcf5c5f36e2"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.677408 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6b6845ff-5j2pc"]
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.698037 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6b6845ff-5j2pc"]
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.708099 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t82qx"]
Jan 23 06:53:51 crc kubenswrapper[4937]: E0123 06:53:51.708533 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerName="init"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.708553 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerName="init"
Jan 23 06:53:51 crc kubenswrapper[4937]: E0123 06:53:51.708564 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7820d186-f940-4136-a1cb-ee4437320523" containerName="mariadb-account-create-update"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.708573 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7820d186-f940-4136-a1cb-ee4437320523" containerName="mariadb-account-create-update"
Jan 23 06:53:51 crc kubenswrapper[4937]: E0123 06:53:51.708621 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerName="dnsmasq-dns"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.708630 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerName="dnsmasq-dns"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.708824 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7820d186-f940-4136-a1cb-ee4437320523" containerName="mariadb-account-create-update"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.708854 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" containerName="dnsmasq-dns"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.709541 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.712944 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.717381 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t82qx"]
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.818918 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b85842c-f787-4c77-a95e-ca7d84c05240-operator-scripts\") pod \"root-account-create-update-t82qx\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.819102 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j5kg\" (UniqueName: \"kubernetes.io/projected/4b85842c-f787-4c77-a95e-ca7d84c05240-kube-api-access-4j5kg\") pod \"root-account-create-update-t82qx\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.920887 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j5kg\" (UniqueName: \"kubernetes.io/projected/4b85842c-f787-4c77-a95e-ca7d84c05240-kube-api-access-4j5kg\") pod \"root-account-create-update-t82qx\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.921025 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b85842c-f787-4c77-a95e-ca7d84c05240-operator-scripts\") pod \"root-account-create-update-t82qx\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.921773 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b85842c-f787-4c77-a95e-ca7d84c05240-operator-scripts\") pod \"root-account-create-update-t82qx\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:51 crc kubenswrapper[4937]: I0123 06:53:51.939623 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j5kg\" (UniqueName: \"kubernetes.io/projected/4b85842c-f787-4c77-a95e-ca7d84c05240-kube-api-access-4j5kg\") pod \"root-account-create-update-t82qx\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.050344 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-t82qx"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.437769 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.461887 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.514174 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.545021 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a51817c1-3ea7-4f5e-801b-10c2d04fa99e" path="/var/lib/kubelet/pods/a51817c1-3ea7-4f5e-801b-10c2d04fa99e/volumes"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.614364 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t82qx"]
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.681245 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t82qx" event={"ID":"4b85842c-f787-4c77-a95e-ca7d84c05240","Type":"ContainerStarted","Data":"75a60a2ae294feb2e24a1fd453b7fdc9f6166ae598e55ecd4f3412b872ea743e"}
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.859428 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-gq9c7"]
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.860835 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.863570 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-4tn9v"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.863973 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.873753 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-gq9c7"]
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.897897 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-52bq2"]
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.899156 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52bq2"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.916633 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-52bq2"]
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.940382 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5pj\" (UniqueName: \"kubernetes.io/projected/93fa8840-692c-42de-9ee7-90b9a399aff7-kube-api-access-st5pj\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.940434 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-config-data\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.940461 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-db-sync-config-data\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:52 crc kubenswrapper[4937]: I0123 06:53:52.940745 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-combined-ca-bundle\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.038310 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5a85-account-create-update-5ztfz"]
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.039570 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a85-account-create-update-5ztfz"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.042259 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.052069 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5pj\" (UniqueName: \"kubernetes.io/projected/93fa8840-692c-42de-9ee7-90b9a399aff7-kube-api-access-st5pj\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.052120 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-config-data\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.052148 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-db-sync-config-data\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.052226 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-combined-ca-bundle\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.052272 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acd308e9-2acb-4147-832b-54ef709110b9-operator-scripts\") pod \"barbican-db-create-52bq2\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") " pod="openstack/barbican-db-create-52bq2"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.052295 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph2sg\" (UniqueName: \"kubernetes.io/projected/acd308e9-2acb-4147-832b-54ef709110b9-kube-api-access-ph2sg\") pod \"barbican-db-create-52bq2\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") " pod="openstack/barbican-db-create-52bq2"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.059064 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-db-sync-config-data\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.060067 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-combined-ca-bundle\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.092412 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-config-data\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.109536 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a85-account-create-update-5ztfz"]
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.112284 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5pj\" (UniqueName: \"kubernetes.io/projected/93fa8840-692c-42de-9ee7-90b9a399aff7-kube-api-access-st5pj\") pod \"watcher-db-sync-gq9c7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " pod="openstack/watcher-db-sync-gq9c7"
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.127979 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qj42j"]
Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.129089 4937 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.155744 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d78873-02fa-4d85-acc3-2b16a2a64d1d-operator-scripts\") pod \"barbican-5a85-account-create-update-5ztfz\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") " pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.155846 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acd308e9-2acb-4147-832b-54ef709110b9-operator-scripts\") pod \"barbican-db-create-52bq2\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") " pod="openstack/barbican-db-create-52bq2" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.155876 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph2sg\" (UniqueName: \"kubernetes.io/projected/acd308e9-2acb-4147-832b-54ef709110b9-kube-api-access-ph2sg\") pod \"barbican-db-create-52bq2\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") " pod="openstack/barbican-db-create-52bq2" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.155945 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxhc\" (UniqueName: \"kubernetes.io/projected/25d78873-02fa-4d85-acc3-2b16a2a64d1d-kube-api-access-8fxhc\") pod \"barbican-5a85-account-create-update-5ztfz\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") " pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.156822 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acd308e9-2acb-4147-832b-54ef709110b9-operator-scripts\") pod 
\"barbican-db-create-52bq2\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") " pod="openstack/barbican-db-create-52bq2" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.158413 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qj42j"] Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.188666 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gq9c7" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.216763 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph2sg\" (UniqueName: \"kubernetes.io/projected/acd308e9-2acb-4147-832b-54ef709110b9-kube-api-access-ph2sg\") pod \"barbican-db-create-52bq2\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") " pod="openstack/barbican-db-create-52bq2" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.218942 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52bq2" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.257157 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/984b99dd-0126-409e-b676-fb8d7c21dac5-operator-scripts\") pod \"cinder-db-create-qj42j\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.257223 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d78873-02fa-4d85-acc3-2b16a2a64d1d-operator-scripts\") pod \"barbican-5a85-account-create-update-5ztfz\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") " pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.257303 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bs7d4\" (UniqueName: \"kubernetes.io/projected/984b99dd-0126-409e-b676-fb8d7c21dac5-kube-api-access-bs7d4\") pod \"cinder-db-create-qj42j\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.257327 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxhc\" (UniqueName: \"kubernetes.io/projected/25d78873-02fa-4d85-acc3-2b16a2a64d1d-kube-api-access-8fxhc\") pod \"barbican-5a85-account-create-update-5ztfz\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") " pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.258263 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d78873-02fa-4d85-acc3-2b16a2a64d1d-operator-scripts\") pod \"barbican-5a85-account-create-update-5ztfz\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") " pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.293993 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxhc\" (UniqueName: \"kubernetes.io/projected/25d78873-02fa-4d85-acc3-2b16a2a64d1d-kube-api-access-8fxhc\") pod \"barbican-5a85-account-create-update-5ztfz\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") " pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.339396 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-17d2-account-create-update-xn6kn"] Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.343403 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.356519 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5a85-account-create-update-5ztfz" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.358287 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.360673 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-17d2-account-create-update-xn6kn"] Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.361121 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7d4\" (UniqueName: \"kubernetes.io/projected/984b99dd-0126-409e-b676-fb8d7c21dac5-kube-api-access-bs7d4\") pod \"cinder-db-create-qj42j\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.361468 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/984b99dd-0126-409e-b676-fb8d7c21dac5-operator-scripts\") pod \"cinder-db-create-qj42j\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.362576 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/984b99dd-0126-409e-b676-fb8d7c21dac5-operator-scripts\") pod \"cinder-db-create-qj42j\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.386240 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hmfzs"] Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.395921 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.415533 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mg5cr" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.415633 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.415757 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.416035 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.462771 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-kube-api-access-2djqq\") pod \"cinder-17d2-account-create-update-xn6kn\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") " pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.463016 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-operator-scripts\") pod \"cinder-17d2-account-create-update-xn6kn\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") " pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.463099 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ts2t\" (UniqueName: \"kubernetes.io/projected/4d0685ee-4785-4cba-906e-1dc96462dfd8-kube-api-access-6ts2t\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " 
pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.463417 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-config-data\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.463446 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-combined-ca-bundle\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.468640 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7d4\" (UniqueName: \"kubernetes.io/projected/984b99dd-0126-409e-b676-fb8d7c21dac5-kube-api-access-bs7d4\") pod \"cinder-db-create-qj42j\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.493794 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hmfzs"] Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.516967 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qj42j" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.565909 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-kube-api-access-2djqq\") pod \"cinder-17d2-account-create-update-xn6kn\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") " pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.565958 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-operator-scripts\") pod \"cinder-17d2-account-create-update-xn6kn\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") " pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.566050 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ts2t\" (UniqueName: \"kubernetes.io/projected/4d0685ee-4785-4cba-906e-1dc96462dfd8-kube-api-access-6ts2t\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.566084 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-config-data\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.566107 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-combined-ca-bundle\") pod \"keystone-db-sync-hmfzs\" (UID: 
\"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.569025 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-operator-scripts\") pod \"cinder-17d2-account-create-update-xn6kn\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") " pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.571201 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-combined-ca-bundle\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.582907 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-config-data\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.603537 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-kube-api-access-2djqq\") pod \"cinder-17d2-account-create-update-xn6kn\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") " pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.629346 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ts2t\" (UniqueName: \"kubernetes.io/projected/4d0685ee-4785-4cba-906e-1dc96462dfd8-kube-api-access-6ts2t\") pod \"keystone-db-sync-hmfzs\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " 
pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.695642 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t82qx" event={"ID":"4b85842c-f787-4c77-a95e-ca7d84c05240","Type":"ContainerStarted","Data":"c60f70e632c7b8c8cd6b41713a5e31fa65aa30046b5713a329baf64d82a031f3"} Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.728265 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.748802 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-t82qx" podStartSLOduration=2.748784597 podStartE2EDuration="2.748784597s" podCreationTimestamp="2026-01-23 06:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:53.728207269 +0000 UTC m=+1233.531973922" watchObservedRunningTime="2026-01-23 06:53:53.748784597 +0000 UTC m=+1233.552551250" Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.764177 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-gq9c7"] Jan 23 06:53:53 crc kubenswrapper[4937]: I0123 06:53:53.772761 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.062636 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-52bq2"] Jan 23 06:53:54 crc kubenswrapper[4937]: W0123 06:53:54.067165 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacd308e9_2acb_4147_832b_54ef709110b9.slice/crio-976c33c5e54830ab731e0e352c3fbb2d2dc57e3f05a1ae827ea04b4c643eaa86 WatchSource:0}: Error finding container 976c33c5e54830ab731e0e352c3fbb2d2dc57e3f05a1ae827ea04b4c643eaa86: Status 404 returned error can't find the container with id 976c33c5e54830ab731e0e352c3fbb2d2dc57e3f05a1ae827ea04b4c643eaa86 Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.125073 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5a85-account-create-update-5ztfz"] Jan 23 06:53:54 crc kubenswrapper[4937]: W0123 06:53:54.156326 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d78873_02fa_4d85_acc3_2b16a2a64d1d.slice/crio-62ed1939179bd260942535c5bb33634856701352cc8a0fe66e6b778ff24984c8 WatchSource:0}: Error finding container 62ed1939179bd260942535c5bb33634856701352cc8a0fe66e6b778ff24984c8: Status 404 returned error can't find the container with id 62ed1939179bd260942535c5bb33634856701352cc8a0fe66e6b778ff24984c8 Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.254240 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qj42j"] Jan 23 06:53:54 crc kubenswrapper[4937]: W0123 06:53:54.269078 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod984b99dd_0126_409e_b676_fb8d7c21dac5.slice/crio-be4dd8a7358de412a14b35be4dd931ec26ac9454eac7ed75e3f0596c3faec774 WatchSource:0}: Error finding container 
be4dd8a7358de412a14b35be4dd931ec26ac9454eac7ed75e3f0596c3faec774: Status 404 returned error can't find the container with id be4dd8a7358de412a14b35be4dd931ec26ac9454eac7ed75e3f0596c3faec774 Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.335199 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hmfzs"] Jan 23 06:53:54 crc kubenswrapper[4937]: W0123 06:53:54.336656 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d0685ee_4785_4cba_906e_1dc96462dfd8.slice/crio-2f7e9a142e043cb5b0898bb8323536ba4a0d76a0aeea565db37d47d30be15981 WatchSource:0}: Error finding container 2f7e9a142e043cb5b0898bb8323536ba4a0d76a0aeea565db37d47d30be15981: Status 404 returned error can't find the container with id 2f7e9a142e043cb5b0898bb8323536ba4a0d76a0aeea565db37d47d30be15981 Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.372510 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-17d2-account-create-update-xn6kn"] Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.706950 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xkzmc"] Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.709459 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.710183 4937 generic.go:334] "Generic (PLEG): container finished" podID="4b85842c-f787-4c77-a95e-ca7d84c05240" containerID="c60f70e632c7b8c8cd6b41713a5e31fa65aa30046b5713a329baf64d82a031f3" exitCode=0 Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.710287 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t82qx" event={"ID":"4b85842c-f787-4c77-a95e-ca7d84c05240","Type":"ContainerDied","Data":"c60f70e632c7b8c8cd6b41713a5e31fa65aa30046b5713a329baf64d82a031f3"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.720610 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xkzmc"] Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.720830 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17d2-account-create-update-xn6kn" event={"ID":"d2d2864f-0eb5-49f7-82f5-7101a4e794b4","Type":"ContainerStarted","Data":"abde4668bb78b731e5df4e213ef69762b8352efd4976f35a6b42d0155fdab9a2"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.725984 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gq9c7" event={"ID":"93fa8840-692c-42de-9ee7-90b9a399aff7","Type":"ContainerStarted","Data":"baa3d322005fe9d7892b292b1bcc21f9af858786699f573694376e505d78bafa"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.731443 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a85-account-create-update-5ztfz" event={"ID":"25d78873-02fa-4d85-acc3-2b16a2a64d1d","Type":"ContainerStarted","Data":"09174f85ccdc72a1ae0351d22088a9f9281ac8a3b21d95fcb7d0cbead915f57e"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.731494 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a85-account-create-update-5ztfz" 
event={"ID":"25d78873-02fa-4d85-acc3-2b16a2a64d1d","Type":"ContainerStarted","Data":"62ed1939179bd260942535c5bb33634856701352cc8a0fe66e6b778ff24984c8"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.733357 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmfzs" event={"ID":"4d0685ee-4785-4cba-906e-1dc96462dfd8","Type":"ContainerStarted","Data":"2f7e9a142e043cb5b0898bb8323536ba4a0d76a0aeea565db37d47d30be15981"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.737106 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52bq2" event={"ID":"acd308e9-2acb-4147-832b-54ef709110b9","Type":"ContainerStarted","Data":"de60b485dbd6634834b74fb8978b0c9f338cfc7f6e80244eb5fbd6bc8d7aedf2"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.737170 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52bq2" event={"ID":"acd308e9-2acb-4147-832b-54ef709110b9","Type":"ContainerStarted","Data":"976c33c5e54830ab731e0e352c3fbb2d2dc57e3f05a1ae827ea04b4c643eaa86"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.751018 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qj42j" event={"ID":"984b99dd-0126-409e-b676-fb8d7c21dac5","Type":"ContainerStarted","Data":"dffc13c4bf8096996071d4d1eb83956201f8e5e118bddfa51a5bad9c6eee6f4f"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.751074 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qj42j" event={"ID":"984b99dd-0126-409e-b676-fb8d7c21dac5","Type":"ContainerStarted","Data":"be4dd8a7358de412a14b35be4dd931ec26ac9454eac7ed75e3f0596c3faec774"} Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.787875 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5a85-account-create-update-5ztfz" podStartSLOduration=2.787857749 podStartE2EDuration="2.787857749s" podCreationTimestamp="2026-01-23 
06:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:54.77646155 +0000 UTC m=+1234.580228203" watchObservedRunningTime="2026-01-23 06:53:54.787857749 +0000 UTC m=+1234.591624402" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.799457 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqmx9\" (UniqueName: \"kubernetes.io/projected/59ca4133-99d3-4a83-9f7c-7c033fb97be9-kube-api-access-cqmx9\") pod \"glance-db-create-xkzmc\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.799505 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ca4133-99d3-4a83-9f7c-7c033fb97be9-operator-scripts\") pod \"glance-db-create-xkzmc\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.806395 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-52bq2" podStartSLOduration=2.80635801 podStartE2EDuration="2.80635801s" podCreationTimestamp="2026-01-23 06:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:54.800290926 +0000 UTC m=+1234.604057579" watchObservedRunningTime="2026-01-23 06:53:54.80635801 +0000 UTC m=+1234.610124663" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.835872 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d8ad-account-create-update-csqmj"] Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.837078 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.840358 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.841774 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qj42j" podStartSLOduration=1.841758979 podStartE2EDuration="1.841758979s" podCreationTimestamp="2026-01-23 06:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:54.818209541 +0000 UTC m=+1234.621976194" watchObservedRunningTime="2026-01-23 06:53:54.841758979 +0000 UTC m=+1234.645525632" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.860537 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d8ad-account-create-update-csqmj"] Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.904142 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7f8b93-b708-40a9-ab16-38829a84d0c8-operator-scripts\") pod \"glance-d8ad-account-create-update-csqmj\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") " pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.904239 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqmx9\" (UniqueName: \"kubernetes.io/projected/59ca4133-99d3-4a83-9f7c-7c033fb97be9-kube-api-access-cqmx9\") pod \"glance-db-create-xkzmc\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.904267 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2qs\" 
(UniqueName: \"kubernetes.io/projected/ec7f8b93-b708-40a9-ab16-38829a84d0c8-kube-api-access-ms2qs\") pod \"glance-d8ad-account-create-update-csqmj\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") " pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.904289 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ca4133-99d3-4a83-9f7c-7c033fb97be9-operator-scripts\") pod \"glance-db-create-xkzmc\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.906490 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ca4133-99d3-4a83-9f7c-7c033fb97be9-operator-scripts\") pod \"glance-db-create-xkzmc\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.909063 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.915098 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:54 crc kubenswrapper[4937]: I0123 06:53:54.935616 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqmx9\" (UniqueName: \"kubernetes.io/projected/59ca4133-99d3-4a83-9f7c-7c033fb97be9-kube-api-access-cqmx9\") pod \"glance-db-create-xkzmc\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.006770 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ec7f8b93-b708-40a9-ab16-38829a84d0c8-operator-scripts\") pod \"glance-d8ad-account-create-update-csqmj\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") " pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.006889 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms2qs\" (UniqueName: \"kubernetes.io/projected/ec7f8b93-b708-40a9-ab16-38829a84d0c8-kube-api-access-ms2qs\") pod \"glance-d8ad-account-create-update-csqmj\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") " pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.007834 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7f8b93-b708-40a9-ab16-38829a84d0c8-operator-scripts\") pod \"glance-d8ad-account-create-update-csqmj\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") " pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.011822 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d9e4-account-create-update-5v6g8"] Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.013194 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.015495 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.028524 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-sw22l"] Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.031508 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.037221 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms2qs\" (UniqueName: \"kubernetes.io/projected/ec7f8b93-b708-40a9-ab16-38829a84d0c8-kube-api-access-ms2qs\") pod \"glance-d8ad-account-create-update-csqmj\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") " pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.055707 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xkzmc" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.112749 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d9e4-account-create-update-5v6g8"] Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.113551 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd9db7-c05b-439f-b15c-65c07379eed1-operator-scripts\") pod \"neutron-db-create-sw22l\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") " pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.113655 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9srk\" (UniqueName: \"kubernetes.io/projected/f42c87e6-6330-47b7-bc7c-29a4d469f783-kube-api-access-n9srk\") pod \"neutron-d9e4-account-create-update-5v6g8\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.113691 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f42c87e6-6330-47b7-bc7c-29a4d469f783-operator-scripts\") pod 
\"neutron-d9e4-account-create-update-5v6g8\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.113762 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbj4s\" (UniqueName: \"kubernetes.io/projected/c1bd9db7-c05b-439f-b15c-65c07379eed1-kube-api-access-nbj4s\") pod \"neutron-db-create-sw22l\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") " pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.126150 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-sw22l"] Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.165792 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.216089 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbj4s\" (UniqueName: \"kubernetes.io/projected/c1bd9db7-c05b-439f-b15c-65c07379eed1-kube-api-access-nbj4s\") pod \"neutron-db-create-sw22l\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") " pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.216764 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd9db7-c05b-439f-b15c-65c07379eed1-operator-scripts\") pod \"neutron-db-create-sw22l\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") " pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.216857 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9srk\" (UniqueName: \"kubernetes.io/projected/f42c87e6-6330-47b7-bc7c-29a4d469f783-kube-api-access-n9srk\") pod 
\"neutron-d9e4-account-create-update-5v6g8\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.216905 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f42c87e6-6330-47b7-bc7c-29a4d469f783-operator-scripts\") pod \"neutron-d9e4-account-create-update-5v6g8\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.218237 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd9db7-c05b-439f-b15c-65c07379eed1-operator-scripts\") pod \"neutron-db-create-sw22l\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") " pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.218283 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f42c87e6-6330-47b7-bc7c-29a4d469f783-operator-scripts\") pod \"neutron-d9e4-account-create-update-5v6g8\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.253807 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbj4s\" (UniqueName: \"kubernetes.io/projected/c1bd9db7-c05b-439f-b15c-65c07379eed1-kube-api-access-nbj4s\") pod \"neutron-db-create-sw22l\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") " pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.253913 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9srk\" (UniqueName: 
\"kubernetes.io/projected/f42c87e6-6330-47b7-bc7c-29a4d469f783-kube-api-access-n9srk\") pod \"neutron-d9e4-account-create-update-5v6g8\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.440855 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.461498 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-sw22l" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.591552 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xkzmc"] Jan 23 06:53:55 crc kubenswrapper[4937]: W0123 06:53:55.600502 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59ca4133_99d3_4a83_9f7c_7c033fb97be9.slice/crio-7013324e35641eb98c4f767cae3aa0e80b17d91b7f995bc9effafccaa3771ca3 WatchSource:0}: Error finding container 7013324e35641eb98c4f767cae3aa0e80b17d91b7f995bc9effafccaa3771ca3: Status 404 returned error can't find the container with id 7013324e35641eb98c4f767cae3aa0e80b17d91b7f995bc9effafccaa3771ca3 Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.768514 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xkzmc" event={"ID":"59ca4133-99d3-4a83-9f7c-7c033fb97be9","Type":"ContainerStarted","Data":"7013324e35641eb98c4f767cae3aa0e80b17d91b7f995bc9effafccaa3771ca3"} Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.788394 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17d2-account-create-update-xn6kn" event={"ID":"d2d2864f-0eb5-49f7-82f5-7101a4e794b4","Type":"ContainerStarted","Data":"55130bd7f8b8cecacc529d513613abdcac6d6b6b3df67d95735ae5ff728e8d08"} Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 
06:53:55.831796 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.892292 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-17d2-account-create-update-xn6kn" podStartSLOduration=2.89225695 podStartE2EDuration="2.89225695s" podCreationTimestamp="2026-01-23 06:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:55.833039467 +0000 UTC m=+1235.636806120" watchObservedRunningTime="2026-01-23 06:53:55.89225695 +0000 UTC m=+1235.696023603" Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.897637 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d8ad-account-create-update-csqmj"] Jan 23 06:53:55 crc kubenswrapper[4937]: I0123 06:53:55.991965 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-sw22l"] Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.089312 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d9e4-account-create-update-5v6g8"] Jan 23 06:53:56 crc kubenswrapper[4937]: W0123 06:53:56.134382 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf42c87e6_6330_47b7_bc7c_29a4d469f783.slice/crio-14a46a14742c4e91df30ed3a97565607984aa2dbc0f79613436c665f7fb7d57f WatchSource:0}: Error finding container 14a46a14742c4e91df30ed3a97565607984aa2dbc0f79613436c665f7fb7d57f: Status 404 returned error can't find the container with id 14a46a14742c4e91df30ed3a97565607984aa2dbc0f79613436c665f7fb7d57f Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.310797 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t82qx" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.355306 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b85842c-f787-4c77-a95e-ca7d84c05240-operator-scripts\") pod \"4b85842c-f787-4c77-a95e-ca7d84c05240\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.355405 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j5kg\" (UniqueName: \"kubernetes.io/projected/4b85842c-f787-4c77-a95e-ca7d84c05240-kube-api-access-4j5kg\") pod \"4b85842c-f787-4c77-a95e-ca7d84c05240\" (UID: \"4b85842c-f787-4c77-a95e-ca7d84c05240\") " Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.355882 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b85842c-f787-4c77-a95e-ca7d84c05240-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b85842c-f787-4c77-a95e-ca7d84c05240" (UID: "4b85842c-f787-4c77-a95e-ca7d84c05240"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.361870 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b85842c-f787-4c77-a95e-ca7d84c05240-kube-api-access-4j5kg" (OuterVolumeSpecName: "kube-api-access-4j5kg") pod "4b85842c-f787-4c77-a95e-ca7d84c05240" (UID: "4b85842c-f787-4c77-a95e-ca7d84c05240"). InnerVolumeSpecName "kube-api-access-4j5kg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.458088 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j5kg\" (UniqueName: \"kubernetes.io/projected/4b85842c-f787-4c77-a95e-ca7d84c05240-kube-api-access-4j5kg\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.458124 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b85842c-f787-4c77-a95e-ca7d84c05240-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.800023 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t82qx" event={"ID":"4b85842c-f787-4c77-a95e-ca7d84c05240","Type":"ContainerDied","Data":"75a60a2ae294feb2e24a1fd453b7fdc9f6166ae598e55ecd4f3412b872ea743e"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.800400 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a60a2ae294feb2e24a1fd453b7fdc9f6166ae598e55ecd4f3412b872ea743e" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.800475 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t82qx" Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.804127 4937 generic.go:334] "Generic (PLEG): container finished" podID="25d78873-02fa-4d85-acc3-2b16a2a64d1d" containerID="09174f85ccdc72a1ae0351d22088a9f9281ac8a3b21d95fcb7d0cbead915f57e" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.804190 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a85-account-create-update-5ztfz" event={"ID":"25d78873-02fa-4d85-acc3-2b16a2a64d1d","Type":"ContainerDied","Data":"09174f85ccdc72a1ae0351d22088a9f9281ac8a3b21d95fcb7d0cbead915f57e"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.806029 4937 generic.go:334] "Generic (PLEG): container finished" podID="59ca4133-99d3-4a83-9f7c-7c033fb97be9" containerID="e692c9efcdc01b568ddb350ff63f9e54d158272e124bc8505fc6b438247d9fb1" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.806118 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xkzmc" event={"ID":"59ca4133-99d3-4a83-9f7c-7c033fb97be9","Type":"ContainerDied","Data":"e692c9efcdc01b568ddb350ff63f9e54d158272e124bc8505fc6b438247d9fb1"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.809201 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d9e4-account-create-update-5v6g8" event={"ID":"f42c87e6-6330-47b7-bc7c-29a4d469f783","Type":"ContainerStarted","Data":"10ec3343b9128a3503a13bfc72b8b41afea597c1ab7737b33dda9e1f6a6b6a12"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.809233 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d9e4-account-create-update-5v6g8" event={"ID":"f42c87e6-6330-47b7-bc7c-29a4d469f783","Type":"ContainerStarted","Data":"14a46a14742c4e91df30ed3a97565607984aa2dbc0f79613436c665f7fb7d57f"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.811380 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="ec7f8b93-b708-40a9-ab16-38829a84d0c8" containerID="d317ec48fc97a8c1103f439a86a4d038cbabbad6968c865962a23f624856cd2b" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.811483 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d8ad-account-create-update-csqmj" event={"ID":"ec7f8b93-b708-40a9-ab16-38829a84d0c8","Type":"ContainerDied","Data":"d317ec48fc97a8c1103f439a86a4d038cbabbad6968c865962a23f624856cd2b"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.811547 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d8ad-account-create-update-csqmj" event={"ID":"ec7f8b93-b708-40a9-ab16-38829a84d0c8","Type":"ContainerStarted","Data":"c0eb6be7dbf0a1806f4c0de1ea738eeaf11b3d228a8fb07b64146c8a1f83511f"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.813016 4937 generic.go:334] "Generic (PLEG): container finished" podID="d2d2864f-0eb5-49f7-82f5-7101a4e794b4" containerID="55130bd7f8b8cecacc529d513613abdcac6d6b6b3df67d95735ae5ff728e8d08" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.813138 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17d2-account-create-update-xn6kn" event={"ID":"d2d2864f-0eb5-49f7-82f5-7101a4e794b4","Type":"ContainerDied","Data":"55130bd7f8b8cecacc529d513613abdcac6d6b6b3df67d95735ae5ff728e8d08"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.814500 4937 generic.go:334] "Generic (PLEG): container finished" podID="acd308e9-2acb-4147-832b-54ef709110b9" containerID="de60b485dbd6634834b74fb8978b0c9f338cfc7f6e80244eb5fbd6bc8d7aedf2" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.814612 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52bq2" event={"ID":"acd308e9-2acb-4147-832b-54ef709110b9","Type":"ContainerDied","Data":"de60b485dbd6634834b74fb8978b0c9f338cfc7f6e80244eb5fbd6bc8d7aedf2"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.815888 4937 
generic.go:334] "Generic (PLEG): container finished" podID="c1bd9db7-c05b-439f-b15c-65c07379eed1" containerID="376b59faf6b28db368074e5a711f7e5fb359eae800eb9a1d528ae5d1e44899d2" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.815982 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-sw22l" event={"ID":"c1bd9db7-c05b-439f-b15c-65c07379eed1","Type":"ContainerDied","Data":"376b59faf6b28db368074e5a711f7e5fb359eae800eb9a1d528ae5d1e44899d2"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.816065 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-sw22l" event={"ID":"c1bd9db7-c05b-439f-b15c-65c07379eed1","Type":"ContainerStarted","Data":"12e2ba93fa34050dcaa380b474a4474ce5cabde1ab3a7477e40668e3365eb5b7"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.821959 4937 generic.go:334] "Generic (PLEG): container finished" podID="984b99dd-0126-409e-b676-fb8d7c21dac5" containerID="dffc13c4bf8096996071d4d1eb83956201f8e5e118bddfa51a5bad9c6eee6f4f" exitCode=0 Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.822914 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qj42j" event={"ID":"984b99dd-0126-409e-b676-fb8d7c21dac5","Type":"ContainerDied","Data":"dffc13c4bf8096996071d4d1eb83956201f8e5e118bddfa51a5bad9c6eee6f4f"} Jan 23 06:53:56 crc kubenswrapper[4937]: I0123 06:53:56.863064 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d9e4-account-create-update-5v6g8" podStartSLOduration=2.863043233 podStartE2EDuration="2.863043233s" podCreationTimestamp="2026-01-23 06:53:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:53:56.859072626 +0000 UTC m=+1236.662839279" watchObservedRunningTime="2026-01-23 06:53:56.863043233 +0000 UTC m=+1236.666809886" Jan 23 06:53:57 crc kubenswrapper[4937]: I0123 
06:53:57.838502 4937 generic.go:334] "Generic (PLEG): container finished" podID="f42c87e6-6330-47b7-bc7c-29a4d469f783" containerID="10ec3343b9128a3503a13bfc72b8b41afea597c1ab7737b33dda9e1f6a6b6a12" exitCode=0 Jan 23 06:53:57 crc kubenswrapper[4937]: I0123 06:53:57.838642 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d9e4-account-create-update-5v6g8" event={"ID":"f42c87e6-6330-47b7-bc7c-29a4d469f783","Type":"ContainerDied","Data":"10ec3343b9128a3503a13bfc72b8b41afea597c1ab7737b33dda9e1f6a6b6a12"} Jan 23 06:53:58 crc kubenswrapper[4937]: I0123 06:53:58.423639 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t82qx"] Jan 23 06:53:58 crc kubenswrapper[4937]: I0123 06:53:58.430655 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t82qx"] Jan 23 06:53:58 crc kubenswrapper[4937]: I0123 06:53:58.544864 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b85842c-f787-4c77-a95e-ca7d84c05240" path="/var/lib/kubelet/pods/4b85842c-f787-4c77-a95e-ca7d84c05240/volumes" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.717562 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-blcnj"] Jan 23 06:54:01 crc kubenswrapper[4937]: E0123 06:54:01.718786 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b85842c-f787-4c77-a95e-ca7d84c05240" containerName="mariadb-account-create-update" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.718802 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b85842c-f787-4c77-a95e-ca7d84c05240" containerName="mariadb-account-create-update" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.719012 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b85842c-f787-4c77-a95e-ca7d84c05240" containerName="mariadb-account-create-update" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.719580 4937 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.722248 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.728537 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-blcnj"] Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.772761 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfe36f0e-88d4-4103-afe6-4901ae147853-operator-scripts\") pod \"root-account-create-update-blcnj\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") " pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.772815 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7jg\" (UniqueName: \"kubernetes.io/projected/bfe36f0e-88d4-4103-afe6-4901ae147853-kube-api-access-kz7jg\") pod \"root-account-create-update-blcnj\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") " pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.874766 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfe36f0e-88d4-4103-afe6-4901ae147853-operator-scripts\") pod \"root-account-create-update-blcnj\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") " pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.874897 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7jg\" (UniqueName: \"kubernetes.io/projected/bfe36f0e-88d4-4103-afe6-4901ae147853-kube-api-access-kz7jg\") pod 
\"root-account-create-update-blcnj\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") " pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.888305 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfe36f0e-88d4-4103-afe6-4901ae147853-operator-scripts\") pod \"root-account-create-update-blcnj\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") " pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:01 crc kubenswrapper[4937]: I0123 06:54:01.905256 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7jg\" (UniqueName: \"kubernetes.io/projected/bfe36f0e-88d4-4103-afe6-4901ae147853-kube-api-access-kz7jg\") pod \"root-account-create-update-blcnj\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") " pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:02 crc kubenswrapper[4937]: I0123 06:54:02.052631 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.426278 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xkzmc" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.457299 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qj42j" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.469393 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d9e4-account-create-update-5v6g8" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.504437 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-17d2-account-create-update-xn6kn" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.522561 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52bq2" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.530154 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs7d4\" (UniqueName: \"kubernetes.io/projected/984b99dd-0126-409e-b676-fb8d7c21dac5-kube-api-access-bs7d4\") pod \"984b99dd-0126-409e-b676-fb8d7c21dac5\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.531272 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/984b99dd-0126-409e-b676-fb8d7c21dac5-operator-scripts\") pod \"984b99dd-0126-409e-b676-fb8d7c21dac5\" (UID: \"984b99dd-0126-409e-b676-fb8d7c21dac5\") " Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.531335 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9srk\" (UniqueName: \"kubernetes.io/projected/f42c87e6-6330-47b7-bc7c-29a4d469f783-kube-api-access-n9srk\") pod \"f42c87e6-6330-47b7-bc7c-29a4d469f783\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.531353 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f42c87e6-6330-47b7-bc7c-29a4d469f783-operator-scripts\") pod \"f42c87e6-6330-47b7-bc7c-29a4d469f783\" (UID: \"f42c87e6-6330-47b7-bc7c-29a4d469f783\") " Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.531384 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqmx9\" (UniqueName: \"kubernetes.io/projected/59ca4133-99d3-4a83-9f7c-7c033fb97be9-kube-api-access-cqmx9\") pod 
\"59ca4133-99d3-4a83-9f7c-7c033fb97be9\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.531467 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ca4133-99d3-4a83-9f7c-7c033fb97be9-operator-scripts\") pod \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\" (UID: \"59ca4133-99d3-4a83-9f7c-7c033fb97be9\") " Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.531853 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d8ad-account-create-update-csqmj" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.532787 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59ca4133-99d3-4a83-9f7c-7c033fb97be9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59ca4133-99d3-4a83-9f7c-7c033fb97be9" (UID: "59ca4133-99d3-4a83-9f7c-7c033fb97be9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.533561 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42c87e6-6330-47b7-bc7c-29a4d469f783-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f42c87e6-6330-47b7-bc7c-29a4d469f783" (UID: "f42c87e6-6330-47b7-bc7c-29a4d469f783"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.532325 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/984b99dd-0126-409e-b676-fb8d7c21dac5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "984b99dd-0126-409e-b676-fb8d7c21dac5" (UID: "984b99dd-0126-409e-b676-fb8d7c21dac5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.539872 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59ca4133-99d3-4a83-9f7c-7c033fb97be9-kube-api-access-cqmx9" (OuterVolumeSpecName: "kube-api-access-cqmx9") pod "59ca4133-99d3-4a83-9f7c-7c033fb97be9" (UID: "59ca4133-99d3-4a83-9f7c-7c033fb97be9"). InnerVolumeSpecName "kube-api-access-cqmx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.539927 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42c87e6-6330-47b7-bc7c-29a4d469f783-kube-api-access-n9srk" (OuterVolumeSpecName: "kube-api-access-n9srk") pod "f42c87e6-6330-47b7-bc7c-29a4d469f783" (UID: "f42c87e6-6330-47b7-bc7c-29a4d469f783"). InnerVolumeSpecName "kube-api-access-n9srk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.548794 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/984b99dd-0126-409e-b676-fb8d7c21dac5-kube-api-access-bs7d4" (OuterVolumeSpecName: "kube-api-access-bs7d4") pod "984b99dd-0126-409e-b676-fb8d7c21dac5" (UID: "984b99dd-0126-409e-b676-fb8d7c21dac5"). InnerVolumeSpecName "kube-api-access-bs7d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.606787 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-blcnj"]
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.613312 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a85-account-create-update-5ztfz"
Jan 23 06:54:07 crc kubenswrapper[4937]: W0123 06:54:07.616522 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfe36f0e_88d4_4103_afe6_4901ae147853.slice/crio-e8cc3b1b58d560b3a011a3cafe8e5cdc909a8106b28b995f248cdb070b3716ed WatchSource:0}: Error finding container e8cc3b1b58d560b3a011a3cafe8e5cdc909a8106b28b995f248cdb070b3716ed: Status 404 returned error can't find the container with id e8cc3b1b58d560b3a011a3cafe8e5cdc909a8106b28b995f248cdb070b3716ed
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.618633 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-sw22l"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.635328 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-kube-api-access-2djqq\") pod \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.635419 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7f8b93-b708-40a9-ab16-38829a84d0c8-operator-scripts\") pod \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.635483 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-operator-scripts\") pod \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\" (UID: \"d2d2864f-0eb5-49f7-82f5-7101a4e794b4\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.635524 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms2qs\" (UniqueName: \"kubernetes.io/projected/ec7f8b93-b708-40a9-ab16-38829a84d0c8-kube-api-access-ms2qs\") pod \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\" (UID: \"ec7f8b93-b708-40a9-ab16-38829a84d0c8\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.638762 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acd308e9-2acb-4147-832b-54ef709110b9-operator-scripts\") pod \"acd308e9-2acb-4147-832b-54ef709110b9\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.639266 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec7f8b93-b708-40a9-ab16-38829a84d0c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec7f8b93-b708-40a9-ab16-38829a84d0c8" (UID: "ec7f8b93-b708-40a9-ab16-38829a84d0c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.639553 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2d2864f-0eb5-49f7-82f5-7101a4e794b4" (UID: "d2d2864f-0eb5-49f7-82f5-7101a4e794b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.640073 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph2sg\" (UniqueName: \"kubernetes.io/projected/acd308e9-2acb-4147-832b-54ef709110b9-kube-api-access-ph2sg\") pod \"acd308e9-2acb-4147-832b-54ef709110b9\" (UID: \"acd308e9-2acb-4147-832b-54ef709110b9\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.640968 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd308e9-2acb-4147-832b-54ef709110b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "acd308e9-2acb-4147-832b-54ef709110b9" (UID: "acd308e9-2acb-4147-832b-54ef709110b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643498 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7f8b93-b708-40a9-ab16-38829a84d0c8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643541 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs7d4\" (UniqueName: \"kubernetes.io/projected/984b99dd-0126-409e-b676-fb8d7c21dac5-kube-api-access-bs7d4\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643557 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/984b99dd-0126-409e-b676-fb8d7c21dac5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643569 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643609 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9srk\" (UniqueName: \"kubernetes.io/projected/f42c87e6-6330-47b7-bc7c-29a4d469f783-kube-api-access-n9srk\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643630 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f42c87e6-6330-47b7-bc7c-29a4d469f783-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643643 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqmx9\" (UniqueName: \"kubernetes.io/projected/59ca4133-99d3-4a83-9f7c-7c033fb97be9-kube-api-access-cqmx9\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643657 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/acd308e9-2acb-4147-832b-54ef709110b9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.643673 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59ca4133-99d3-4a83-9f7c-7c033fb97be9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.646955 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7f8b93-b708-40a9-ab16-38829a84d0c8-kube-api-access-ms2qs" (OuterVolumeSpecName: "kube-api-access-ms2qs") pod "ec7f8b93-b708-40a9-ab16-38829a84d0c8" (UID: "ec7f8b93-b708-40a9-ab16-38829a84d0c8"). InnerVolumeSpecName "kube-api-access-ms2qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.647021 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-kube-api-access-2djqq" (OuterVolumeSpecName: "kube-api-access-2djqq") pod "d2d2864f-0eb5-49f7-82f5-7101a4e794b4" (UID: "d2d2864f-0eb5-49f7-82f5-7101a4e794b4"). InnerVolumeSpecName "kube-api-access-2djqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.685878 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd308e9-2acb-4147-832b-54ef709110b9-kube-api-access-ph2sg" (OuterVolumeSpecName: "kube-api-access-ph2sg") pod "acd308e9-2acb-4147-832b-54ef709110b9" (UID: "acd308e9-2acb-4147-832b-54ef709110b9"). InnerVolumeSpecName "kube-api-access-ph2sg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.723862 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.723914 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745258 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbj4s\" (UniqueName: \"kubernetes.io/projected/c1bd9db7-c05b-439f-b15c-65c07379eed1-kube-api-access-nbj4s\") pod \"c1bd9db7-c05b-439f-b15c-65c07379eed1\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745357 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fxhc\" (UniqueName: \"kubernetes.io/projected/25d78873-02fa-4d85-acc3-2b16a2a64d1d-kube-api-access-8fxhc\") pod \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745404 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd9db7-c05b-439f-b15c-65c07379eed1-operator-scripts\") pod \"c1bd9db7-c05b-439f-b15c-65c07379eed1\" (UID: \"c1bd9db7-c05b-439f-b15c-65c07379eed1\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745477 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d78873-02fa-4d85-acc3-2b16a2a64d1d-operator-scripts\") pod \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\" (UID: \"25d78873-02fa-4d85-acc3-2b16a2a64d1d\") "
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745793 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms2qs\" (UniqueName: \"kubernetes.io/projected/ec7f8b93-b708-40a9-ab16-38829a84d0c8-kube-api-access-ms2qs\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745812 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph2sg\" (UniqueName: \"kubernetes.io/projected/acd308e9-2acb-4147-832b-54ef709110b9-kube-api-access-ph2sg\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.745822 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2djqq\" (UniqueName: \"kubernetes.io/projected/d2d2864f-0eb5-49f7-82f5-7101a4e794b4-kube-api-access-2djqq\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.746502 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25d78873-02fa-4d85-acc3-2b16a2a64d1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25d78873-02fa-4d85-acc3-2b16a2a64d1d" (UID: "25d78873-02fa-4d85-acc3-2b16a2a64d1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.746787 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1bd9db7-c05b-439f-b15c-65c07379eed1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1bd9db7-c05b-439f-b15c-65c07379eed1" (UID: "c1bd9db7-c05b-439f-b15c-65c07379eed1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.748624 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25d78873-02fa-4d85-acc3-2b16a2a64d1d-kube-api-access-8fxhc" (OuterVolumeSpecName: "kube-api-access-8fxhc") pod "25d78873-02fa-4d85-acc3-2b16a2a64d1d" (UID: "25d78873-02fa-4d85-acc3-2b16a2a64d1d"). InnerVolumeSpecName "kube-api-access-8fxhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.748725 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bd9db7-c05b-439f-b15c-65c07379eed1-kube-api-access-nbj4s" (OuterVolumeSpecName: "kube-api-access-nbj4s") pod "c1bd9db7-c05b-439f-b15c-65c07379eed1" (UID: "c1bd9db7-c05b-439f-b15c-65c07379eed1"). InnerVolumeSpecName "kube-api-access-nbj4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.847430 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbj4s\" (UniqueName: \"kubernetes.io/projected/c1bd9db7-c05b-439f-b15c-65c07379eed1-kube-api-access-nbj4s\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.847468 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fxhc\" (UniqueName: \"kubernetes.io/projected/25d78873-02fa-4d85-acc3-2b16a2a64d1d-kube-api-access-8fxhc\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.847478 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1bd9db7-c05b-439f-b15c-65c07379eed1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.847486 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25d78873-02fa-4d85-acc3-2b16a2a64d1d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.929052 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-17d2-account-create-update-xn6kn" event={"ID":"d2d2864f-0eb5-49f7-82f5-7101a4e794b4","Type":"ContainerDied","Data":"abde4668bb78b731e5df4e213ef69762b8352efd4976f35a6b42d0155fdab9a2"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.929115 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-17d2-account-create-update-xn6kn"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.929116 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abde4668bb78b731e5df4e213ef69762b8352efd4976f35a6b42d0155fdab9a2"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.934529 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52bq2"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.934512 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52bq2" event={"ID":"acd308e9-2acb-4147-832b-54ef709110b9","Type":"ContainerDied","Data":"976c33c5e54830ab731e0e352c3fbb2d2dc57e3f05a1ae827ea04b4c643eaa86"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.934678 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="976c33c5e54830ab731e0e352c3fbb2d2dc57e3f05a1ae827ea04b4c643eaa86"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.936123 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qj42j" event={"ID":"984b99dd-0126-409e-b676-fb8d7c21dac5","Type":"ContainerDied","Data":"be4dd8a7358de412a14b35be4dd931ec26ac9454eac7ed75e3f0596c3faec774"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.936152 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be4dd8a7358de412a14b35be4dd931ec26ac9454eac7ed75e3f0596c3faec774"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.936177 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qj42j"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.937426 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xkzmc" event={"ID":"59ca4133-99d3-4a83-9f7c-7c033fb97be9","Type":"ContainerDied","Data":"7013324e35641eb98c4f767cae3aa0e80b17d91b7f995bc9effafccaa3771ca3"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.937453 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7013324e35641eb98c4f767cae3aa0e80b17d91b7f995bc9effafccaa3771ca3"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.937519 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xkzmc"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.940510 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d9e4-account-create-update-5v6g8" event={"ID":"f42c87e6-6330-47b7-bc7c-29a4d469f783","Type":"ContainerDied","Data":"14a46a14742c4e91df30ed3a97565607984aa2dbc0f79613436c665f7fb7d57f"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.940545 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d9e4-account-create-update-5v6g8"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.940560 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14a46a14742c4e91df30ed3a97565607984aa2dbc0f79613436c665f7fb7d57f"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.942088 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d8ad-account-create-update-csqmj" event={"ID":"ec7f8b93-b708-40a9-ab16-38829a84d0c8","Type":"ContainerDied","Data":"c0eb6be7dbf0a1806f4c0de1ea738eeaf11b3d228a8fb07b64146c8a1f83511f"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.942100 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d8ad-account-create-update-csqmj"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.942416 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0eb6be7dbf0a1806f4c0de1ea738eeaf11b3d228a8fb07b64146c8a1f83511f"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.946265 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gq9c7" event={"ID":"93fa8840-692c-42de-9ee7-90b9a399aff7","Type":"ContainerStarted","Data":"30e9b3c28bc99f50d1eef034a2f9341c33ded7490074ebec69187e9aaeaddd4e"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.964169 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5a85-account-create-update-5ztfz" event={"ID":"25d78873-02fa-4d85-acc3-2b16a2a64d1d","Type":"ContainerDied","Data":"62ed1939179bd260942535c5bb33634856701352cc8a0fe66e6b778ff24984c8"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.964236 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ed1939179bd260942535c5bb33634856701352cc8a0fe66e6b778ff24984c8"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.964395 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5a85-account-create-update-5ztfz"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.967756 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-blcnj" event={"ID":"bfe36f0e-88d4-4103-afe6-4901ae147853","Type":"ContainerStarted","Data":"24c4169587c6655d6f1c8b3009a09680b3ad08bc8acab0352ba722016c575f64"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.967794 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-blcnj" event={"ID":"bfe36f0e-88d4-4103-afe6-4901ae147853","Type":"ContainerStarted","Data":"e8cc3b1b58d560b3a011a3cafe8e5cdc909a8106b28b995f248cdb070b3716ed"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.969058 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-gq9c7" podStartSLOduration=2.565575699 podStartE2EDuration="15.969029088s" podCreationTimestamp="2026-01-23 06:53:52 +0000 UTC" firstStartedPulling="2026-01-23 06:53:53.778895213 +0000 UTC m=+1233.582661866" lastFinishedPulling="2026-01-23 06:54:07.182348602 +0000 UTC m=+1246.986115255" observedRunningTime="2026-01-23 06:54:07.966273003 +0000 UTC m=+1247.770039656" watchObservedRunningTime="2026-01-23 06:54:07.969029088 +0000 UTC m=+1247.772795741"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.971228 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmfzs" event={"ID":"4d0685ee-4785-4cba-906e-1dc96462dfd8","Type":"ContainerStarted","Data":"d40e35dade476c123ac0b49287dd7cf102fa07d297f806d70e4dea9011467907"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.973977 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-sw22l" event={"ID":"c1bd9db7-c05b-439f-b15c-65c07379eed1","Type":"ContainerDied","Data":"12e2ba93fa34050dcaa380b474a4474ce5cabde1ab3a7477e40668e3365eb5b7"}
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.974020 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12e2ba93fa34050dcaa380b474a4474ce5cabde1ab3a7477e40668e3365eb5b7"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.974029 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-sw22l"
Jan 23 06:54:07 crc kubenswrapper[4937]: I0123 06:54:07.996711 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hmfzs" podStartSLOduration=2.168756165 podStartE2EDuration="14.996692597s" podCreationTimestamp="2026-01-23 06:53:53 +0000 UTC" firstStartedPulling="2026-01-23 06:53:54.34039871 +0000 UTC m=+1234.144165363" lastFinishedPulling="2026-01-23 06:54:07.168335142 +0000 UTC m=+1246.972101795" observedRunningTime="2026-01-23 06:54:07.991115676 +0000 UTC m=+1247.794882339" watchObservedRunningTime="2026-01-23 06:54:07.996692597 +0000 UTC m=+1247.800459250"
Jan 23 06:54:08 crc kubenswrapper[4937]: I0123 06:54:08.016784 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-blcnj" podStartSLOduration=7.016767701 podStartE2EDuration="7.016767701s" podCreationTimestamp="2026-01-23 06:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:54:08.005620299 +0000 UTC m=+1247.809386952" watchObservedRunningTime="2026-01-23 06:54:08.016767701 +0000 UTC m=+1247.820534354"
Jan 23 06:54:08 crc kubenswrapper[4937]: I0123 06:54:08.993729 4937 generic.go:334] "Generic (PLEG): container finished" podID="bfe36f0e-88d4-4103-afe6-4901ae147853" containerID="24c4169587c6655d6f1c8b3009a09680b3ad08bc8acab0352ba722016c575f64" exitCode=0
Jan 23 06:54:08 crc kubenswrapper[4937]: I0123 06:54:08.995086 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-blcnj" event={"ID":"bfe36f0e-88d4-4103-afe6-4901ae147853","Type":"ContainerDied","Data":"24c4169587c6655d6f1c8b3009a09680b3ad08bc8acab0352ba722016c575f64"}
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986090 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2n4zb"]
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986810 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="984b99dd-0126-409e-b676-fb8d7c21dac5" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986833 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="984b99dd-0126-409e-b676-fb8d7c21dac5" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986848 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42c87e6-6330-47b7-bc7c-29a4d469f783" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986859 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42c87e6-6330-47b7-bc7c-29a4d469f783" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986876 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd308e9-2acb-4147-832b-54ef709110b9" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986884 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd308e9-2acb-4147-832b-54ef709110b9" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986899 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1bd9db7-c05b-439f-b15c-65c07379eed1" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986906 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1bd9db7-c05b-439f-b15c-65c07379eed1" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986919 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25d78873-02fa-4d85-acc3-2b16a2a64d1d" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986925 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="25d78873-02fa-4d85-acc3-2b16a2a64d1d" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986940 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59ca4133-99d3-4a83-9f7c-7c033fb97be9" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986947 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="59ca4133-99d3-4a83-9f7c-7c033fb97be9" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986968 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2d2864f-0eb5-49f7-82f5-7101a4e794b4" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986976 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2d2864f-0eb5-49f7-82f5-7101a4e794b4" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: E0123 06:54:09.986985 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7f8b93-b708-40a9-ab16-38829a84d0c8" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.986992 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7f8b93-b708-40a9-ab16-38829a84d0c8" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987198 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="59ca4133-99d3-4a83-9f7c-7c033fb97be9" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987239 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7f8b93-b708-40a9-ab16-38829a84d0c8" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987254 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2d2864f-0eb5-49f7-82f5-7101a4e794b4" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987270 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1bd9db7-c05b-439f-b15c-65c07379eed1" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987287 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="984b99dd-0126-409e-b676-fb8d7c21dac5" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987304 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="25d78873-02fa-4d85-acc3-2b16a2a64d1d" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987321 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42c87e6-6330-47b7-bc7c-29a4d469f783" containerName="mariadb-account-create-update"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.987338 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd308e9-2acb-4147-832b-54ef709110b9" containerName="mariadb-database-create"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.988075 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.994019 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.994433 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ngf4q"
Jan 23 06:54:09 crc kubenswrapper[4937]: I0123 06:54:09.996018 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2n4zb"]
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.089736 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-combined-ca-bundle\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.089790 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4rn\" (UniqueName: \"kubernetes.io/projected/e02394c2-2975-4589-9670-7c69fa89cb1d-kube-api-access-jp4rn\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.089837 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-db-sync-config-data\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.090034 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-config-data\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.192092 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-config-data\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.192172 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-combined-ca-bundle\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.192191 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4rn\" (UniqueName: \"kubernetes.io/projected/e02394c2-2975-4589-9670-7c69fa89cb1d-kube-api-access-jp4rn\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.192211 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-db-sync-config-data\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.198231 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-db-sync-config-data\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.198298 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-combined-ca-bundle\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.199238 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-config-data\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.211929 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4rn\" (UniqueName: \"kubernetes.io/projected/e02394c2-2975-4589-9670-7c69fa89cb1d-kube-api-access-jp4rn\") pod \"glance-db-sync-2n4zb\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.312829 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2n4zb"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.421347 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-blcnj"
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.613907 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfe36f0e-88d4-4103-afe6-4901ae147853-operator-scripts\") pod \"bfe36f0e-88d4-4103-afe6-4901ae147853\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") "
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.614042 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz7jg\" (UniqueName: \"kubernetes.io/projected/bfe36f0e-88d4-4103-afe6-4901ae147853-kube-api-access-kz7jg\") pod \"bfe36f0e-88d4-4103-afe6-4901ae147853\" (UID: \"bfe36f0e-88d4-4103-afe6-4901ae147853\") "
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.614731 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe36f0e-88d4-4103-afe6-4901ae147853-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfe36f0e-88d4-4103-afe6-4901ae147853" (UID: "bfe36f0e-88d4-4103-afe6-4901ae147853"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.622621 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe36f0e-88d4-4103-afe6-4901ae147853-kube-api-access-kz7jg" (OuterVolumeSpecName: "kube-api-access-kz7jg") pod "bfe36f0e-88d4-4103-afe6-4901ae147853" (UID: "bfe36f0e-88d4-4103-afe6-4901ae147853"). InnerVolumeSpecName "kube-api-access-kz7jg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.717063 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz7jg\" (UniqueName: \"kubernetes.io/projected/bfe36f0e-88d4-4103-afe6-4901ae147853-kube-api-access-kz7jg\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.717309 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfe36f0e-88d4-4103-afe6-4901ae147853-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:10 crc kubenswrapper[4937]: I0123 06:54:10.811146 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2n4zb"]
Jan 23 06:54:11 crc kubenswrapper[4937]: I0123 06:54:11.013747 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-blcnj" event={"ID":"bfe36f0e-88d4-4103-afe6-4901ae147853","Type":"ContainerDied","Data":"e8cc3b1b58d560b3a011a3cafe8e5cdc909a8106b28b995f248cdb070b3716ed"}
Jan 23 06:54:11 crc kubenswrapper[4937]: I0123 06:54:11.013789 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8cc3b1b58d560b3a011a3cafe8e5cdc909a8106b28b995f248cdb070b3716ed"
Jan 23 06:54:11 crc kubenswrapper[4937]: I0123 06:54:11.013851 4937 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-blcnj" Jan 23 06:54:11 crc kubenswrapper[4937]: I0123 06:54:11.016761 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2n4zb" event={"ID":"e02394c2-2975-4589-9670-7c69fa89cb1d","Type":"ContainerStarted","Data":"b9ac35dc3e6f33d835645aa3239cb119ec35d6af0b009accf4caa0095993a6fc"} Jan 23 06:54:12 crc kubenswrapper[4937]: I0123 06:54:12.027245 4937 generic.go:334] "Generic (PLEG): container finished" podID="4d0685ee-4785-4cba-906e-1dc96462dfd8" containerID="d40e35dade476c123ac0b49287dd7cf102fa07d297f806d70e4dea9011467907" exitCode=0 Jan 23 06:54:12 crc kubenswrapper[4937]: I0123 06:54:12.027314 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmfzs" event={"ID":"4d0685ee-4785-4cba-906e-1dc96462dfd8","Type":"ContainerDied","Data":"d40e35dade476c123ac0b49287dd7cf102fa07d297f806d70e4dea9011467907"} Jan 23 06:54:12 crc kubenswrapper[4937]: I0123 06:54:12.030484 4937 generic.go:334] "Generic (PLEG): container finished" podID="93fa8840-692c-42de-9ee7-90b9a399aff7" containerID="30e9b3c28bc99f50d1eef034a2f9341c33ded7490074ebec69187e9aaeaddd4e" exitCode=0 Jan 23 06:54:12 crc kubenswrapper[4937]: I0123 06:54:12.030563 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gq9c7" event={"ID":"93fa8840-692c-42de-9ee7-90b9a399aff7","Type":"ContainerDied","Data":"30e9b3c28bc99f50d1eef034a2f9341c33ded7490074ebec69187e9aaeaddd4e"} Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.432454 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-blcnj"] Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.439516 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.443482 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-blcnj"] Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.452291 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-gq9c7" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476577 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ts2t\" (UniqueName: \"kubernetes.io/projected/4d0685ee-4785-4cba-906e-1dc96462dfd8-kube-api-access-6ts2t\") pod \"4d0685ee-4785-4cba-906e-1dc96462dfd8\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476769 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-db-sync-config-data\") pod \"93fa8840-692c-42de-9ee7-90b9a399aff7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476803 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-combined-ca-bundle\") pod \"4d0685ee-4785-4cba-906e-1dc96462dfd8\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476825 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-config-data\") pod \"4d0685ee-4785-4cba-906e-1dc96462dfd8\" (UID: \"4d0685ee-4785-4cba-906e-1dc96462dfd8\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476849 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-combined-ca-bundle\") pod \"93fa8840-692c-42de-9ee7-90b9a399aff7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476887 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5pj\" (UniqueName: \"kubernetes.io/projected/93fa8840-692c-42de-9ee7-90b9a399aff7-kube-api-access-st5pj\") pod \"93fa8840-692c-42de-9ee7-90b9a399aff7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.476926 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-config-data\") pod \"93fa8840-692c-42de-9ee7-90b9a399aff7\" (UID: \"93fa8840-692c-42de-9ee7-90b9a399aff7\") " Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.490426 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0685ee-4785-4cba-906e-1dc96462dfd8-kube-api-access-6ts2t" (OuterVolumeSpecName: "kube-api-access-6ts2t") pod "4d0685ee-4785-4cba-906e-1dc96462dfd8" (UID: "4d0685ee-4785-4cba-906e-1dc96462dfd8"). InnerVolumeSpecName "kube-api-access-6ts2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.493588 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fa8840-692c-42de-9ee7-90b9a399aff7-kube-api-access-st5pj" (OuterVolumeSpecName: "kube-api-access-st5pj") pod "93fa8840-692c-42de-9ee7-90b9a399aff7" (UID: "93fa8840-692c-42de-9ee7-90b9a399aff7"). InnerVolumeSpecName "kube-api-access-st5pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.507481 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "93fa8840-692c-42de-9ee7-90b9a399aff7" (UID: "93fa8840-692c-42de-9ee7-90b9a399aff7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.521333 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d0685ee-4785-4cba-906e-1dc96462dfd8" (UID: "4d0685ee-4785-4cba-906e-1dc96462dfd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.536624 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-config-data" (OuterVolumeSpecName: "config-data") pod "4d0685ee-4785-4cba-906e-1dc96462dfd8" (UID: "4d0685ee-4785-4cba-906e-1dc96462dfd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.537272 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93fa8840-692c-42de-9ee7-90b9a399aff7" (UID: "93fa8840-692c-42de-9ee7-90b9a399aff7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.544103 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-config-data" (OuterVolumeSpecName: "config-data") pod "93fa8840-692c-42de-9ee7-90b9a399aff7" (UID: "93fa8840-692c-42de-9ee7-90b9a399aff7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579504 4937 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579538 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579547 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0685ee-4785-4cba-906e-1dc96462dfd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579560 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579571 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st5pj\" (UniqueName: \"kubernetes.io/projected/93fa8840-692c-42de-9ee7-90b9a399aff7-kube-api-access-st5pj\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579580 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/93fa8840-692c-42de-9ee7-90b9a399aff7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:13 crc kubenswrapper[4937]: I0123 06:54:13.579604 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ts2t\" (UniqueName: \"kubernetes.io/projected/4d0685ee-4785-4cba-906e-1dc96462dfd8-kube-api-access-6ts2t\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.048820 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmfzs" event={"ID":"4d0685ee-4785-4cba-906e-1dc96462dfd8","Type":"ContainerDied","Data":"2f7e9a142e043cb5b0898bb8323536ba4a0d76a0aeea565db37d47d30be15981"} Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.048881 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7e9a142e043cb5b0898bb8323536ba4a0d76a0aeea565db37d47d30be15981" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.048894 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hmfzs" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.051805 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-gq9c7" event={"ID":"93fa8840-692c-42de-9ee7-90b9a399aff7","Type":"ContainerDied","Data":"baa3d322005fe9d7892b292b1bcc21f9af858786699f573694376e505d78bafa"} Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.051893 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-gq9c7" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.051826 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baa3d322005fe9d7892b292b1bcc21f9af858786699f573694376e505d78bafa" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.279552 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57b64ddbd9-fhncp"] Jan 23 06:54:14 crc kubenswrapper[4937]: E0123 06:54:14.279902 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93fa8840-692c-42de-9ee7-90b9a399aff7" containerName="watcher-db-sync" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.279918 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fa8840-692c-42de-9ee7-90b9a399aff7" containerName="watcher-db-sync" Jan 23 06:54:14 crc kubenswrapper[4937]: E0123 06:54:14.279938 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0685ee-4785-4cba-906e-1dc96462dfd8" containerName="keystone-db-sync" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.279944 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0685ee-4785-4cba-906e-1dc96462dfd8" containerName="keystone-db-sync" Jan 23 06:54:14 crc kubenswrapper[4937]: E0123 06:54:14.279953 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe36f0e-88d4-4103-afe6-4901ae147853" containerName="mariadb-account-create-update" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.279961 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe36f0e-88d4-4103-afe6-4901ae147853" containerName="mariadb-account-create-update" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.280104 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0685ee-4785-4cba-906e-1dc96462dfd8" containerName="keystone-db-sync" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.280118 4937 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bfe36f0e-88d4-4103-afe6-4901ae147853" containerName="mariadb-account-create-update" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.280138 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="93fa8840-692c-42de-9ee7-90b9a399aff7" containerName="watcher-db-sync" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.281024 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.311878 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57b64ddbd9-fhncp"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.392288 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xt5f4"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393571 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393695 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-nb\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393753 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-swift-storage-0\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393777 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-svc\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393846 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-config\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393881 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7tt\" (UniqueName: \"kubernetes.io/projected/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-kube-api-access-mj7tt\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.393900 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-sb\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.396014 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.396455 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.396796 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mg5cr" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.400119 4937 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.407710 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.428654 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xt5f4"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.494965 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-sb\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495049 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-nb\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495102 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2lbx\" (UniqueName: \"kubernetes.io/projected/43768d25-8cd7-4078-baa6-7afc37a1c276-kube-api-access-g2lbx\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495126 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-scripts\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 
crc kubenswrapper[4937]: I0123 06:54:14.495165 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-swift-storage-0\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495191 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-svc\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495269 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-credential-keys\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495300 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-fernet-keys\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495328 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-config\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495368 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-config-data\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495400 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-combined-ca-bundle\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495434 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj7tt\" (UniqueName: \"kubernetes.io/projected/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-kube-api-access-mj7tt\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.495956 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-sb\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.496438 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-nb\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.496750 4937 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-swift-storage-0\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.497166 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-config\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.497412 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-svc\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.512347 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.513618 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.522815 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-4tn9v" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.528644 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.531931 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj7tt\" (UniqueName: \"kubernetes.io/projected/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-kube-api-access-mj7tt\") pod \"dnsmasq-dns-57b64ddbd9-fhncp\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") " pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.563753 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe36f0e-88d4-4103-afe6-4901ae147853" path="/var/lib/kubelet/pods/bfe36f0e-88d4-4103-afe6-4901ae147853/volumes" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.567202 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.568822 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.572918 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.597757 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-credential-keys\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.597842 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-fernet-keys\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.597899 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-config-data\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.597925 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-combined-ca-bundle\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.597996 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2lbx\" (UniqueName: 
\"kubernetes.io/projected/43768d25-8cd7-4078-baa6-7afc37a1c276-kube-api-access-g2lbx\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.598019 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-scripts\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.612796 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.613685 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-combined-ca-bundle\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.617650 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-config-data\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.620445 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-fernet-keys\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.620940 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-credential-keys\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.626765 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.631850 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-scripts\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.655696 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.683582 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57b9678b9-lvjkg"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.685764 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.696530 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.696843 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2qgr6" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707606 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-config-data\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707681 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707728 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfg5\" (UniqueName: \"kubernetes.io/projected/52ed7619-5046-4b7c-b8a1-458d6831b92b-kube-api-access-8gfg5\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707752 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ed7619-5046-4b7c-b8a1-458d6831b92b-logs\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc 
kubenswrapper[4937]: I0123 06:54:14.707769 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707803 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhb9\" (UniqueName: \"kubernetes.io/projected/51ed91bb-3033-4e77-8378-6cbdb382dc98-kube-api-access-zfhb9\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707830 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707850 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed91bb-3033-4e77-8378-6cbdb382dc98-logs\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.707872 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 
06:54:14.708292 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2lbx\" (UniqueName: \"kubernetes.io/projected/43768d25-8cd7-4078-baa6-7afc37a1c276-kube-api-access-g2lbx\") pod \"keystone-bootstrap-xt5f4\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.727755 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.730313 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.739076 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.756534 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.762886 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.779898 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.809928 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-scripts\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.810273 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-config-data\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.810406 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7830585d-2333-44d0-8d49-284d149ded04-logs\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.810547 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7830585d-2333-44d0-8d49-284d149ded04-horizon-secret-key\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.810689 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-config-data\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.810832 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.810979 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gfg5\" (UniqueName: \"kubernetes.io/projected/52ed7619-5046-4b7c-b8a1-458d6831b92b-kube-api-access-8gfg5\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811199 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftpv\" (UniqueName: \"kubernetes.io/projected/7830585d-2333-44d0-8d49-284d149ded04-kube-api-access-gftpv\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811319 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ed7619-5046-4b7c-b8a1-458d6831b92b-logs\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811395 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811482 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhb9\" (UniqueName: \"kubernetes.io/projected/51ed91bb-3033-4e77-8378-6cbdb382dc98-kube-api-access-zfhb9\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811578 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811714 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed91bb-3033-4e77-8378-6cbdb382dc98-logs\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.811827 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.816442 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: 
\"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.821425 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-config-data\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.821639 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ed7619-5046-4b7c-b8a1-458d6831b92b-logs\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.827179 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed91bb-3033-4e77-8378-6cbdb382dc98-logs\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.839092 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.850142 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b9678b9-lvjkg"] Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.861499 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " 
pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.866665 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.945726 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-config-data\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.945835 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftpv\" (UniqueName: \"kubernetes.io/projected/7830585d-2333-44d0-8d49-284d149ded04-kube-api-access-gftpv\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.945911 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.945952 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwsmr\" (UniqueName: \"kubernetes.io/projected/39f0cf1c-6bdf-4c85-90ed-b49e620682af-kube-api-access-vwsmr\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:14 crc 
kubenswrapper[4937]: I0123 06:54:14.946037 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-scripts\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.946073 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-config-data\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.946099 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7830585d-2333-44d0-8d49-284d149ded04-logs\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.946135 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7830585d-2333-44d0-8d49-284d149ded04-horizon-secret-key\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.946156 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f0cf1c-6bdf-4c85-90ed-b49e620682af-logs\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.946186 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.957447 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhb9\" (UniqueName: \"kubernetes.io/projected/51ed91bb-3033-4e77-8378-6cbdb382dc98-kube-api-access-zfhb9\") pod \"watcher-applier-0\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " pod="openstack/watcher-applier-0" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.960354 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7830585d-2333-44d0-8d49-284d149ded04-logs\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:14 crc kubenswrapper[4937]: I0123 06:54:14.982104 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.012224 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-scripts\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.017213 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7830585d-2333-44d0-8d49-284d149ded04-horizon-secret-key\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.031090 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-config-data\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.053608 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f0cf1c-6bdf-4c85-90ed-b49e620682af-logs\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.053675 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.053747 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-config-data\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.053840 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.053875 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwsmr\" (UniqueName: \"kubernetes.io/projected/39f0cf1c-6bdf-4c85-90ed-b49e620682af-kube-api-access-vwsmr\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 
23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.054086 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f0cf1c-6bdf-4c85-90ed-b49e620682af-logs\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.058186 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-zjhvz"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.059841 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfg5\" (UniqueName: \"kubernetes.io/projected/52ed7619-5046-4b7c-b8a1-458d6831b92b-kube-api-access-8gfg5\") pod \"watcher-decision-engine-0\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.067998 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftpv\" (UniqueName: \"kubernetes.io/projected/7830585d-2333-44d0-8d49-284d149ded04-kube-api-access-gftpv\") pod \"horizon-57b9678b9-lvjkg\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") " pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.076152 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.079475 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.085769 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-config-data\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.089296 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.100285 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.107257 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.116950 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.117224 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.118200 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bxfcj" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.143676 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.176369 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwsmr\" (UniqueName: \"kubernetes.io/projected/39f0cf1c-6bdf-4c85-90ed-b49e620682af-kube-api-access-vwsmr\") pod \"watcher-api-0\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.183245 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.194299 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.195046 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b9678b9-lvjkg" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.199313 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.206231 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-combined-ca-bundle\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.206269 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-scripts\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.206310 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-db-sync-config-data\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.206329 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f77dc563-57f2-4c47-a627-98d15343173b-etc-machine-id\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.206350 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-config-data\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.206453 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g49k\" (UniqueName: \"kubernetes.io/projected/f77dc563-57f2-4c47-a627-98d15343173b-kube-api-access-7g49k\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.295899 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-gkrpw"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.297343 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.301555 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xp7hm" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.301865 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310460 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310551 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-log-httpd\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " 
pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310582 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-scripts\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310648 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-run-httpd\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310713 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-config-data\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310761 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g49k\" (UniqueName: \"kubernetes.io/projected/f77dc563-57f2-4c47-a627-98d15343173b-kube-api-access-7g49k\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.310795 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf8qk\" (UniqueName: \"kubernetes.io/projected/d82e186e-8995-43ff-a65f-a1918f8495bf-kube-api-access-xf8qk\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.313633 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-combined-ca-bundle\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.313684 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-scripts\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.313738 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.313804 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-db-sync-config-data\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.313827 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f77dc563-57f2-4c47-a627-98d15343173b-etc-machine-id\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.313866 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-config-data\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.320555 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f77dc563-57f2-4c47-a627-98d15343173b-etc-machine-id\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.321022 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.349414 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zjhvz"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.369396 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g49k\" (UniqueName: \"kubernetes.io/projected/f77dc563-57f2-4c47-a627-98d15343173b-kube-api-access-7g49k\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.369460 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.371478 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-combined-ca-bundle\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.372219 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-db-sync-config-data\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.373158 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-config-data\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.381902 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-scripts\") pod \"cinder-db-sync-zjhvz\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.390303 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gkrpw"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.412109 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-xgx9n"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.413635 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.415822 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf8qk\" (UniqueName: \"kubernetes.io/projected/d82e186e-8995-43ff-a65f-a1918f8495bf-kube-api-access-xf8qk\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.415879 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.415930 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-db-sync-config-data\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.415961 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-combined-ca-bundle\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.415995 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc 
kubenswrapper[4937]: I0123 06:54:15.416033 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-log-httpd\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.416055 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-scripts\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.416087 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-run-httpd\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.416130 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-config-data\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.416150 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7nk\" (UniqueName: \"kubernetes.io/projected/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-kube-api-access-vn7nk\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.419428 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8jg5b" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 
06:54:15.419698 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.419974 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.420315 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.422278 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-log-httpd\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.422569 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-run-httpd\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.441463 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-scripts\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.441690 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 
06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.449669 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-config-data\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.449677 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f5ff7ccd9-lvv8r"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.451288 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.462623 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf8qk\" (UniqueName: \"kubernetes.io/projected/d82e186e-8995-43ff-a65f-a1918f8495bf-kube-api-access-xf8qk\") pod \"ceilometer-0\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.475696 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xgx9n"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.496311 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b64ddbd9-fhncp"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.506809 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f5ff7ccd9-lvv8r"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.517888 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-scripts\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.517937 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54t5h\" (UniqueName: \"kubernetes.io/projected/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-kube-api-access-54t5h\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518200 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7nk\" (UniqueName: \"kubernetes.io/projected/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-kube-api-access-vn7nk\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518310 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-logs\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518348 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-horizon-secret-key\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518407 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-db-sync-config-data\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518428 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-config-data\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518466 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-combined-ca-bundle\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518486 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-config\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518517 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-combined-ca-bundle\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.518569 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwqs\" (UniqueName: \"kubernetes.io/projected/f2537508-9450-4931-b4a4-d87cfdaa4a77-kube-api-access-8rwqs\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.524635 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-combined-ca-bundle\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.529953 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-db-sync-config-data\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.534579 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.545800 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gbhl4"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.547014 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.556520 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.556905 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.556979 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6664758949-2wffl"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.561329 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.563389 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-g8n99" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.568046 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.568560 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7nk\" (UniqueName: \"kubernetes.io/projected/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-kube-api-access-vn7nk\") pod \"barbican-db-sync-gkrpw\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.585851 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6664758949-2wffl"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622536 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54t5h\" (UniqueName: \"kubernetes.io/projected/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-kube-api-access-54t5h\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622616 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62hw\" (UniqueName: \"kubernetes.io/projected/93868e10-2b6e-4517-a01e-885cdd05fbd8-kube-api-access-q62hw\") pod 
\"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622643 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-combined-ca-bundle\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622665 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-swift-storage-0\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622687 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-svc\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622735 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-logs\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622757 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-horizon-secret-key\") pod \"horizon-7f5ff7ccd9-lvv8r\" 
(UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622775 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-config-data\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622798 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2tck\" (UniqueName: \"kubernetes.io/projected/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-kube-api-access-z2tck\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622818 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-config-data\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622836 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-config\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622859 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-combined-ca-bundle\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") 
" pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622874 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-scripts\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622890 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-config\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622918 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-logs\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622936 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwqs\" (UniqueName: \"kubernetes.io/projected/f2537508-9450-4931-b4a4-d87cfdaa4a77-kube-api-access-8rwqs\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.622981 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-nb\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 
06:54:15.623001 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-sb\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.623017 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-scripts\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.623798 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-scripts\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.624325 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-logs\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.626834 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-config-data\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.631384 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-horizon-secret-key\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.637557 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-config\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.637634 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gbhl4"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.640244 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-combined-ca-bundle\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.650182 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54t5h\" (UniqueName: \"kubernetes.io/projected/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-kube-api-access-54t5h\") pod \"horizon-7f5ff7ccd9-lvv8r\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.652332 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwqs\" (UniqueName: \"kubernetes.io/projected/f2537508-9450-4931-b4a4-d87cfdaa4a77-kube-api-access-8rwqs\") pod \"neutron-db-sync-xgx9n\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") " pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.677249 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727497 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-nb\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727558 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-sb\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727629 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q62hw\" (UniqueName: \"kubernetes.io/projected/93868e10-2b6e-4517-a01e-885cdd05fbd8-kube-api-access-q62hw\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727665 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-combined-ca-bundle\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727695 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-swift-storage-0\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " 
pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727720 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-svc\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727787 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-config-data\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727819 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2tck\" (UniqueName: \"kubernetes.io/projected/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-kube-api-access-z2tck\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727849 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-config\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727876 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-scripts\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.727915 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-logs\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.728545 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-logs\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.729166 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-config\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.729633 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-svc\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.732017 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-sb\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.732462 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-nb\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.735675 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xt5f4"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.740778 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-scripts\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.742028 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-swift-storage-0\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.754229 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-combined-ca-bundle\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.779940 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-config-data\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.783897 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z2tck\" (UniqueName: \"kubernetes.io/projected/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-kube-api-access-z2tck\") pod \"placement-db-sync-gbhl4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.787581 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xgx9n" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.797381 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q62hw\" (UniqueName: \"kubernetes.io/projected/93868e10-2b6e-4517-a01e-885cdd05fbd8-kube-api-access-q62hw\") pod \"dnsmasq-dns-6664758949-2wffl\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.810756 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.840741 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gbhl4" Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.871913 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b64ddbd9-fhncp"] Jan 23 06:54:15 crc kubenswrapper[4937]: I0123 06:54:15.872643 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:54:15 crc kubenswrapper[4937]: W0123 06:54:15.923169 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90051c59_196e_4c8f_8ea6_31c1c4ce8cc0.slice/crio-98577f310f26ab7aac7154ea21ef04e219d61553aae1e206bcb0f5261adbeac9 WatchSource:0}: Error finding container 98577f310f26ab7aac7154ea21ef04e219d61553aae1e206bcb0f5261adbeac9: Status 404 returned error can't find the container with id 98577f310f26ab7aac7154ea21ef04e219d61553aae1e206bcb0f5261adbeac9 Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.147114 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xt5f4" event={"ID":"43768d25-8cd7-4078-baa6-7afc37a1c276","Type":"ContainerStarted","Data":"d6cba724ab5687620111a8a299b66b457e389e2f5361d8c4de74d7f4d230e3f0"} Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.148433 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" event={"ID":"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0","Type":"ContainerStarted","Data":"98577f310f26ab7aac7154ea21ef04e219d61553aae1e206bcb0f5261adbeac9"} Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.154650 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.171136 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.233731 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b9678b9-lvjkg"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.447455 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zjhvz"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.454685 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/watcher-api-0"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.461798 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-gkrpw"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.581125 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.593912 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-xgx9n"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.605032 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f5ff7ccd9-lvv8r"] Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.729162 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gbhl4"] Jan 23 06:54:16 crc kubenswrapper[4937]: W0123 06:54:16.731876 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71dcc2c8_b02c_4203_9fa8_1af6e12615d4.slice/crio-095323cd0591b346259b82b6faa1e1c06903578731bb92d66bd8fee3579a278b WatchSource:0}: Error finding container 095323cd0591b346259b82b6faa1e1c06903578731bb92d66bd8fee3579a278b: Status 404 returned error can't find the container with id 095323cd0591b346259b82b6faa1e1c06903578731bb92d66bd8fee3579a278b Jan 23 06:54:16 crc kubenswrapper[4937]: I0123 06:54:16.740580 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6664758949-2wffl"] Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.160143 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664758949-2wffl" event={"ID":"93868e10-2b6e-4517-a01e-885cdd05fbd8","Type":"ContainerStarted","Data":"426eef3558dd1260443e6c747f7f8857e41409844046d95e7eb2f6b07c21b5f7"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.161450 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gkrpw" 
event={"ID":"47488e25-41b9-46e1-8ad7-5cfe2c4654c7","Type":"ContainerStarted","Data":"d18556cba06a73768d2c0ab3985edd9a47ecceff715f5214e29c0dce4ae7465c"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.163825 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gbhl4" event={"ID":"71dcc2c8-b02c-4203-9fa8-1af6e12615d4","Type":"ContainerStarted","Data":"095323cd0591b346259b82b6faa1e1c06903578731bb92d66bd8fee3579a278b"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.165031 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xgx9n" event={"ID":"f2537508-9450-4931-b4a4-d87cfdaa4a77","Type":"ContainerStarted","Data":"3fff7a39b181c8b11687bf4c6fe78cb5d2bce1346137f2a64e2cd836b4ddd499"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.168208 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"51ed91bb-3033-4e77-8378-6cbdb382dc98","Type":"ContainerStarted","Data":"98ca4777aabc721acd069db2120f6c9fbb94953c11bfe70c3b11ea488ff76e31"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.174134 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f5ff7ccd9-lvv8r" event={"ID":"5d244d9a-0e45-4115-a96a-fa7e2885c6f4","Type":"ContainerStarted","Data":"3ef16ccfc7e0d088f8e46b4d7ef03e0c8f77a7b0963b839ad8f5488596b3b7bc"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.175886 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"52ed7619-5046-4b7c-b8a1-458d6831b92b","Type":"ContainerStarted","Data":"05c98d103508cbe7eed792bdd72ca7ca5935c4cea7b1ed942d3e78a4589baaa1"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.184039 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b9678b9-lvjkg" 
event={"ID":"7830585d-2333-44d0-8d49-284d149ded04","Type":"ContainerStarted","Data":"4630bc30c5834d82b1733b92148a58c1e5c03ee370c2e01abc9b9469422872eb"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.185797 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"39f0cf1c-6bdf-4c85-90ed-b49e620682af","Type":"ContainerStarted","Data":"bf9ee16ce5378612f0ac09baeb00452735bce38426ee0fa803fa49ac900c6be3"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.186943 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zjhvz" event={"ID":"f77dc563-57f2-4c47-a627-98d15343173b","Type":"ContainerStarted","Data":"faa4e6e118e77d5800e6dd9fc249731486dcea0bab345bb088f46dddba5f9fbf"} Jan 23 06:54:17 crc kubenswrapper[4937]: I0123 06:54:17.187677 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerStarted","Data":"5058d9f053dcf537e0f32faa9b8bc73dbc4f1b89d288239d78c30b4e5490254f"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.087083 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.118504 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57b9678b9-lvjkg"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.150016 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.182677 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-679cdd6dd9-c2c84"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.184200 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.188546 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-679cdd6dd9-c2c84"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.188860 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed40d957-d6c3-48f4-9255-576482c38569-horizon-secret-key\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.188906 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-config-data\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.188926 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdh7j\" (UniqueName: \"kubernetes.io/projected/ed40d957-d6c3-48f4-9255-576482c38569-kube-api-access-gdh7j\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.188989 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-scripts\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.189047 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/ed40d957-d6c3-48f4-9255-576482c38569-logs\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.255017 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xgx9n" event={"ID":"f2537508-9450-4931-b4a4-d87cfdaa4a77","Type":"ContainerStarted","Data":"76b5749b3524d08f65f214cc3f03f07a43470a7421af911d91d8855745305eb1"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.259737 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xt5f4" event={"ID":"43768d25-8cd7-4078-baa6-7afc37a1c276","Type":"ContainerStarted","Data":"ee5c4e621e16a9a656d2eaa91903c3b1165ca55a181e43ce4ef8bd0012b061f5"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.266056 4937 generic.go:334] "Generic (PLEG): container finished" podID="90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" containerID="36681108b82fe6a456b53aa0b7c6b1916bd9c61e32bce3a5b066f338a697efb0" exitCode=0 Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.266303 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" event={"ID":"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0","Type":"ContainerDied","Data":"36681108b82fe6a456b53aa0b7c6b1916bd9c61e32bce3a5b066f338a697efb0"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.270004 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-xgx9n" podStartSLOduration=4.269990279 podStartE2EDuration="4.269990279s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:54:18.269351492 +0000 UTC m=+1258.073118145" watchObservedRunningTime="2026-01-23 06:54:18.269990279 +0000 UTC m=+1258.073756932" Jan 23 06:54:18 crc kubenswrapper[4937]: 
I0123 06:54:18.288886 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"39f0cf1c-6bdf-4c85-90ed-b49e620682af","Type":"ContainerStarted","Data":"cff86a88c4bcd75f25fa5f6a13b3f6b6050cd55c13c03b816eb5fd2330c90166"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.288939 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"39f0cf1c-6bdf-4c85-90ed-b49e620682af","Type":"ContainerStarted","Data":"8b5b06273b4f3a4b7a37a9a891cd6fb16ad4e7b3ea224ec5e4573e386ae7ac37"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.289144 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api-log" containerID="cri-o://8b5b06273b4f3a4b7a37a9a891cd6fb16ad4e7b3ea224ec5e4573e386ae7ac37" gracePeriod=30 Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.289650 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" containerID="cri-o://cff86a88c4bcd75f25fa5f6a13b3f6b6050cd55c13c03b816eb5fd2330c90166" gracePeriod=30 Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.291972 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.156:9322/\": dial tcp 10.217.0.156:9322: connect: connection refused" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.292430 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.294872 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40d957-d6c3-48f4-9255-576482c38569-logs\") pod 
\"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.294990 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed40d957-d6c3-48f4-9255-576482c38569-horizon-secret-key\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.295030 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-config-data\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.295051 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdh7j\" (UniqueName: \"kubernetes.io/projected/ed40d957-d6c3-48f4-9255-576482c38569-kube-api-access-gdh7j\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.295186 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-scripts\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.297080 4937 generic.go:334] "Generic (PLEG): container finished" podID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerID="46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737" exitCode=0 Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.297137 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-6664758949-2wffl" event={"ID":"93868e10-2b6e-4517-a01e-885cdd05fbd8","Type":"ContainerDied","Data":"46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737"} Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.305374 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40d957-d6c3-48f4-9255-576482c38569-logs\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.313377 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed40d957-d6c3-48f4-9255-576482c38569-horizon-secret-key\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.316778 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-config-data\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.326299 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdh7j\" (UniqueName: \"kubernetes.io/projected/ed40d957-d6c3-48f4-9255-576482c38569-kube-api-access-gdh7j\") pod \"horizon-679cdd6dd9-c2c84\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.330517 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-scripts\") pod \"horizon-679cdd6dd9-c2c84\" (UID: 
\"ed40d957-d6c3-48f4-9255-576482c38569\") " pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.354314 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xt5f4" podStartSLOduration=4.354297913 podStartE2EDuration="4.354297913s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:54:18.293216058 +0000 UTC m=+1258.096982711" watchObservedRunningTime="2026-01-23 06:54:18.354297913 +0000 UTC m=+1258.158064566" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.393483 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.393466124 podStartE2EDuration="4.393466124s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:54:18.374176811 +0000 UTC m=+1258.177943484" watchObservedRunningTime="2026-01-23 06:54:18.393466124 +0000 UTC m=+1258.197232777" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.456992 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dwbr9"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.459707 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.462253 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.484435 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dwbr9"] Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.520750 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.600946 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49b94\" (UniqueName: \"kubernetes.io/projected/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-kube-api-access-49b94\") pod \"root-account-create-update-dwbr9\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.600995 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-operator-scripts\") pod \"root-account-create-update-dwbr9\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.703479 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-operator-scripts\") pod \"root-account-create-update-dwbr9\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.703721 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-49b94\" (UniqueName: \"kubernetes.io/projected/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-kube-api-access-49b94\") pod \"root-account-create-update-dwbr9\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.704717 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-operator-scripts\") pod \"root-account-create-update-dwbr9\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.724074 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49b94\" (UniqueName: \"kubernetes.io/projected/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-kube-api-access-49b94\") pod \"root-account-create-update-dwbr9\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:18 crc kubenswrapper[4937]: I0123 06:54:18.783670 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dwbr9" Jan 23 06:54:19 crc kubenswrapper[4937]: I0123 06:54:19.307693 4937 generic.go:334] "Generic (PLEG): container finished" podID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerID="8b5b06273b4f3a4b7a37a9a891cd6fb16ad4e7b3ea224ec5e4573e386ae7ac37" exitCode=143 Jan 23 06:54:19 crc kubenswrapper[4937]: I0123 06:54:19.307873 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"39f0cf1c-6bdf-4c85-90ed-b49e620682af","Type":"ContainerDied","Data":"8b5b06273b4f3a4b7a37a9a891cd6fb16ad4e7b3ea224ec5e4573e386ae7ac37"} Jan 23 06:54:20 crc kubenswrapper[4937]: I0123 06:54:20.321766 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:54:22 crc kubenswrapper[4937]: I0123 06:54:22.675761 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.870755 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f5ff7ccd9-lvv8r"] Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.887926 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8676986cc8-dkgvq"] Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.891544 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.898791 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.905820 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8676986cc8-dkgvq"] Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.959963 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-679cdd6dd9-c2c84"] Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.979060 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57b9fd85d8-54qml"] Jan 23 06:54:23 crc kubenswrapper[4937]: I0123 06:54:23.980562 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.003119 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b9fd85d8-54qml"] Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.047997 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-scripts\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.048068 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-secret-key\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.048098 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-combined-ca-bundle\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.048124 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71c9df1-567e-4dd0-be98-fb63b23ebca7-logs\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.048345 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-tls-certs\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.048484 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-config-data\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.048550 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ph2\" (UniqueName: \"kubernetes.io/projected/d71c9df1-567e-4dd0-be98-fb63b23ebca7-kube-api-access-q6ph2\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.150565 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-config-data\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.150667 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-logs\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.150697 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-scripts\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.150731 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-scripts\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.150951 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-secret-key\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151028 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-combined-ca-bundle\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151061 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-combined-ca-bundle\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151116 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71c9df1-567e-4dd0-be98-fb63b23ebca7-logs\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151136 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vpkv\" (UniqueName: \"kubernetes.io/projected/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-kube-api-access-7vpkv\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151198 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-horizon-tls-certs\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151233 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-tls-certs\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151284 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-config-data\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151322 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6ph2\" (UniqueName: \"kubernetes.io/projected/d71c9df1-567e-4dd0-be98-fb63b23ebca7-kube-api-access-q6ph2\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151341 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-horizon-secret-key\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151432 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-scripts\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.151709 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71c9df1-567e-4dd0-be98-fb63b23ebca7-logs\") pod 
\"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.152674 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-config-data\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.160144 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-tls-certs\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.160192 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-combined-ca-bundle\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.160561 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-secret-key\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.169400 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6ph2\" (UniqueName: \"kubernetes.io/projected/d71c9df1-567e-4dd0-be98-fb63b23ebca7-kube-api-access-q6ph2\") pod \"horizon-8676986cc8-dkgvq\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") " 
pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.224860 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.253153 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-config-data\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254427 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-logs\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254459 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-scripts\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254557 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-combined-ca-bundle\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254611 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vpkv\" (UniqueName: \"kubernetes.io/projected/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-kube-api-access-7vpkv\") pod 
\"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254656 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-horizon-tls-certs\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254714 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-horizon-secret-key\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.254354 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-config-data\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.255801 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-logs\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.257447 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-scripts\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc 
kubenswrapper[4937]: I0123 06:54:24.258504 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-horizon-secret-key\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.259498 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-horizon-tls-certs\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.259852 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-combined-ca-bundle\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.274564 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vpkv\" (UniqueName: \"kubernetes.io/projected/6ac78d23-24ea-411c-ba2e-714e3f3fb5d2-kube-api-access-7vpkv\") pod \"horizon-57b9fd85d8-54qml\" (UID: \"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2\") " pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:24 crc kubenswrapper[4937]: I0123 06:54:24.299526 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:54:25 crc kubenswrapper[4937]: I0123 06:54:25.392347 4937 generic.go:334] "Generic (PLEG): container finished" podID="43768d25-8cd7-4078-baa6-7afc37a1c276" containerID="ee5c4e621e16a9a656d2eaa91903c3b1165ca55a181e43ce4ef8bd0012b061f5" exitCode=0 Jan 23 06:54:25 crc kubenswrapper[4937]: I0123 06:54:25.392405 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xt5f4" event={"ID":"43768d25-8cd7-4078-baa6-7afc37a1c276","Type":"ContainerDied","Data":"ee5c4e621e16a9a656d2eaa91903c3b1165ca55a181e43ce4ef8bd0012b061f5"} Jan 23 06:54:28 crc kubenswrapper[4937]: E0123 06:54:28.751051 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 23 06:54:28 crc kubenswrapper[4937]: E0123 06:54:28.751538 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 23 06:54:28 crc kubenswrapper[4937]: E0123 06:54:28.751698 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp4rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-2n4zb_openstack(e02394c2-2975-4589-9670-7c69fa89cb1d): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 23 06:54:28 crc kubenswrapper[4937]: E0123 06:54:28.752943 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-2n4zb" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.833541 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.983244 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-scripts\") pod \"43768d25-8cd7-4078-baa6-7afc37a1c276\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.983292 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-combined-ca-bundle\") pod \"43768d25-8cd7-4078-baa6-7afc37a1c276\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.983357 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-fernet-keys\") pod \"43768d25-8cd7-4078-baa6-7afc37a1c276\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.983391 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2lbx\" (UniqueName: \"kubernetes.io/projected/43768d25-8cd7-4078-baa6-7afc37a1c276-kube-api-access-g2lbx\") pod \"43768d25-8cd7-4078-baa6-7afc37a1c276\" (UID: 
\"43768d25-8cd7-4078-baa6-7afc37a1c276\") " Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.983416 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-credential-keys\") pod \"43768d25-8cd7-4078-baa6-7afc37a1c276\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.983437 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-config-data\") pod \"43768d25-8cd7-4078-baa6-7afc37a1c276\" (UID: \"43768d25-8cd7-4078-baa6-7afc37a1c276\") " Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.987545 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-scripts" (OuterVolumeSpecName: "scripts") pod "43768d25-8cd7-4078-baa6-7afc37a1c276" (UID: "43768d25-8cd7-4078-baa6-7afc37a1c276"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.987947 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "43768d25-8cd7-4078-baa6-7afc37a1c276" (UID: "43768d25-8cd7-4078-baa6-7afc37a1c276"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.987982 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "43768d25-8cd7-4078-baa6-7afc37a1c276" (UID: "43768d25-8cd7-4078-baa6-7afc37a1c276"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:28 crc kubenswrapper[4937]: I0123 06:54:28.988508 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43768d25-8cd7-4078-baa6-7afc37a1c276-kube-api-access-g2lbx" (OuterVolumeSpecName: "kube-api-access-g2lbx") pod "43768d25-8cd7-4078-baa6-7afc37a1c276" (UID: "43768d25-8cd7-4078-baa6-7afc37a1c276"). InnerVolumeSpecName "kube-api-access-g2lbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.009082 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43768d25-8cd7-4078-baa6-7afc37a1c276" (UID: "43768d25-8cd7-4078-baa6-7afc37a1c276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.022159 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-config-data" (OuterVolumeSpecName: "config-data") pod "43768d25-8cd7-4078-baa6-7afc37a1c276" (UID: "43768d25-8cd7-4078-baa6-7afc37a1c276"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.085562 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.085736 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.085751 4937 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.085762 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2lbx\" (UniqueName: \"kubernetes.io/projected/43768d25-8cd7-4078-baa6-7afc37a1c276-kube-api-access-g2lbx\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.085773 4937 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.085784 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43768d25-8cd7-4078-baa6-7afc37a1c276-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.427720 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xt5f4" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.427706 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xt5f4" event={"ID":"43768d25-8cd7-4078-baa6-7afc37a1c276","Type":"ContainerDied","Data":"d6cba724ab5687620111a8a299b66b457e389e2f5361d8c4de74d7f4d230e3f0"} Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.428075 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6cba724ab5687620111a8a299b66b457e389e2f5361d8c4de74d7f4d230e3f0" Jan 23 06:54:29 crc kubenswrapper[4937]: E0123 06:54:29.429222 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-2n4zb" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.914232 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xt5f4"] Jan 23 06:54:29 crc kubenswrapper[4937]: I0123 06:54:29.925279 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xt5f4"] Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.015710 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-srnpk"] Jan 23 06:54:30 crc kubenswrapper[4937]: E0123 06:54:30.016231 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43768d25-8cd7-4078-baa6-7afc37a1c276" containerName="keystone-bootstrap" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.016249 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="43768d25-8cd7-4078-baa6-7afc37a1c276" containerName="keystone-bootstrap" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.016429 4937 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="43768d25-8cd7-4078-baa6-7afc37a1c276" containerName="keystone-bootstrap" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.017073 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-srnpk" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.020001 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.020059 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.020001 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.020234 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mg5cr" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.020249 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.023634 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-srnpk"] Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.213774 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-config-data\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk" Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.214004 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-credential-keys\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " 
pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.214558 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-combined-ca-bundle\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.214733 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2kl\" (UniqueName: \"kubernetes.io/projected/deb165f5-08a0-404a-88b3-f89dc3195c28-kube-api-access-pn2kl\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.214791 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-scripts\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.214913 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-fernet-keys\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.316704 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-scripts\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.317064 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-fernet-keys\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.317236 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-config-data\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.317371 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-credential-keys\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.317494 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-combined-ca-bundle\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.317690 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2kl\" (UniqueName: \"kubernetes.io/projected/deb165f5-08a0-404a-88b3-f89dc3195c28-kube-api-access-pn2kl\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.322911 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-config-data\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.323083 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-fernet-keys\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.324377 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-scripts\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.324738 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-credential-keys\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.326640 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-combined-ca-bundle\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.335795 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2kl\" (UniqueName: \"kubernetes.io/projected/deb165f5-08a0-404a-88b3-f89dc3195c28-kube-api-access-pn2kl\") pod \"keystone-bootstrap-srnpk\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.344875 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-srnpk"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.533940 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.542812 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43768d25-8cd7-4078-baa6-7afc37a1c276" path="/var/lib/kubelet/pods/43768d25-8cd7-4078-baa6-7afc37a1c276/volumes"
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.622884 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-svc\") pod \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") "
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.622946 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-config\") pod \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") "
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.622989 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-sb\") pod \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") "
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.623052 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-swift-storage-0\") pod \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") "
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.623090 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj7tt\" (UniqueName: \"kubernetes.io/projected/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-kube-api-access-mj7tt\") pod \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") "
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.623157 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-nb\") pod \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\" (UID: \"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0\") "
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.626787 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-kube-api-access-mj7tt" (OuterVolumeSpecName: "kube-api-access-mj7tt") pod "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" (UID: "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0"). InnerVolumeSpecName "kube-api-access-mj7tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.645182 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" (UID: "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.646635 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-config" (OuterVolumeSpecName: "config") pod "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" (UID: "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.651071 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" (UID: "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.660517 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" (UID: "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.671787 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" (UID: "90051c59-196e-4c8f-8ea6-31c1c4ce8cc0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.725209 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.725528 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.725674 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.725825 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.725988 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj7tt\" (UniqueName: \"kubernetes.io/projected/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-kube-api-access-mj7tt\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:30 crc kubenswrapper[4937]: I0123 06:54:30.726111 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:31 crc kubenswrapper[4937]: I0123 06:54:31.451656 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp" event={"ID":"90051c59-196e-4c8f-8ea6-31c1c4ce8cc0","Type":"ContainerDied","Data":"98577f310f26ab7aac7154ea21ef04e219d61553aae1e206bcb0f5261adbeac9"}
Jan 23 06:54:31 crc kubenswrapper[4937]: I0123 06:54:31.451729 4937 scope.go:117] "RemoveContainer" containerID="36681108b82fe6a456b53aa0b7c6b1916bd9c61e32bce3a5b066f338a697efb0"
Jan 23 06:54:31 crc kubenswrapper[4937]: I0123 06:54:31.451732 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57b64ddbd9-fhncp"
Jan 23 06:54:31 crc kubenswrapper[4937]: I0123 06:54:31.509139 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57b64ddbd9-fhncp"]
Jan 23 06:54:31 crc kubenswrapper[4937]: I0123 06:54:31.519171 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57b64ddbd9-fhncp"]
Jan 23 06:54:32 crc kubenswrapper[4937]: I0123 06:54:32.536501 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" path="/var/lib/kubelet/pods/90051c59-196e-4c8f-8ea6-31c1c4ce8cc0/volumes"
Jan 23 06:54:37 crc kubenswrapper[4937]: I0123 06:54:37.724693 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:54:37 crc kubenswrapper[4937]: I0123 06:54:37.725492 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:54:42 crc kubenswrapper[4937]: I0123 06:54:42.529895 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 06:54:45 crc kubenswrapper[4937]: E0123 06:54:45.379921 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest"
Jan 23 06:54:45 crc kubenswrapper[4937]: E0123 06:54:45.380565 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest"
Jan 23 06:54:45 crc kubenswrapper[4937]: E0123 06:54:45.380858 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4hb6hbbh5b9h646h5b7hb5h645hcfh569h95h84h595h55ch5fbhd8h5d4h648h5dhbch67fh66dh547h5c7h578h649hc5h76h67h5dh6fh547q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gftpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57b9678b9-lvjkg_openstack(7830585d-2333-44d0-8d49-284d149ded04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 06:54:45 crc kubenswrapper[4937]: E0123 06:54:45.383746 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-57b9678b9-lvjkg" podUID="7830585d-2333-44d0-8d49-284d149ded04"
Jan 23 06:54:46 crc kubenswrapper[4937]: E0123 06:54:46.910029 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-placement-api:watcher_latest"
Jan 23 06:54:46 crc kubenswrapper[4937]: E0123 06:54:46.910377 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-placement-api:watcher_latest"
Jan 23 06:54:46 crc kubenswrapper[4937]: E0123 06:54:46.910500 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.44:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2tck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-gbhl4_openstack(71dcc2c8-b02c-4203-9fa8-1af6e12615d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 06:54:46 crc kubenswrapper[4937]: E0123 06:54:46.912143 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-gbhl4" podUID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4"
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.018796 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57b9678b9-lvjkg"
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.060161 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7830585d-2333-44d0-8d49-284d149ded04-logs\") pod \"7830585d-2333-44d0-8d49-284d149ded04\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") "
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.060222 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-scripts\") pod \"7830585d-2333-44d0-8d49-284d149ded04\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") "
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.060257 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-config-data\") pod \"7830585d-2333-44d0-8d49-284d149ded04\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") "
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.060394 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gftpv\" (UniqueName: \"kubernetes.io/projected/7830585d-2333-44d0-8d49-284d149ded04-kube-api-access-gftpv\") pod \"7830585d-2333-44d0-8d49-284d149ded04\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") "
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.060426 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7830585d-2333-44d0-8d49-284d149ded04-horizon-secret-key\") pod \"7830585d-2333-44d0-8d49-284d149ded04\" (UID: \"7830585d-2333-44d0-8d49-284d149ded04\") "
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.061066 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7830585d-2333-44d0-8d49-284d149ded04-logs" (OuterVolumeSpecName: "logs") pod "7830585d-2333-44d0-8d49-284d149ded04" (UID: "7830585d-2333-44d0-8d49-284d149ded04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.061815 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-config-data" (OuterVolumeSpecName: "config-data") pod "7830585d-2333-44d0-8d49-284d149ded04" (UID: "7830585d-2333-44d0-8d49-284d149ded04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.062252 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-scripts" (OuterVolumeSpecName: "scripts") pod "7830585d-2333-44d0-8d49-284d149ded04" (UID: "7830585d-2333-44d0-8d49-284d149ded04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.070704 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7830585d-2333-44d0-8d49-284d149ded04-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7830585d-2333-44d0-8d49-284d149ded04" (UID: "7830585d-2333-44d0-8d49-284d149ded04"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.070748 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7830585d-2333-44d0-8d49-284d149ded04-kube-api-access-gftpv" (OuterVolumeSpecName: "kube-api-access-gftpv") pod "7830585d-2333-44d0-8d49-284d149ded04" (UID: "7830585d-2333-44d0-8d49-284d149ded04"). InnerVolumeSpecName "kube-api-access-gftpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.162190 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gftpv\" (UniqueName: \"kubernetes.io/projected/7830585d-2333-44d0-8d49-284d149ded04-kube-api-access-gftpv\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.162224 4937 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7830585d-2333-44d0-8d49-284d149ded04-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.162234 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7830585d-2333-44d0-8d49-284d149ded04-logs\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.162243 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.162250 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7830585d-2333-44d0-8d49-284d149ded04-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.230974 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.231079 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.231249 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-applier,Image:38.102.83.44:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc8h695h59ch86h5c5h665h57ch86h58fh67bhffh55dh697h685h5f4h5dbh5c7h64ch594h574h77hc4hbh75hcch66fh5fch689h59hbch5fch679q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-applier-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfhb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-applier-0_openstack(51ed91bb-3033-4e77-8378-6cbdb382dc98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.232774 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.472931 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.473003 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.473195 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-decision-engine,Image:38.102.83.44:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f5h588h646h549h67bh559h588h5f4h5c5hc4h5dbh5bbh686h596hdbhf8h5ch5dfhd8h68ch9ch59dh677h55dhcfh5d8hd8hcdh678h647h5b9h679q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-decision-engine-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:custom-prometheus-ca,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/prometheus/ca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gfg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-decision-engine-0_openstack(52ed7619-5046-4b7c-b8a1-458d6831b92b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.474493 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-decision-engine-0" podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b"
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.624554 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b9678b9-lvjkg" event={"ID":"7830585d-2333-44d0-8d49-284d149ded04","Type":"ContainerDied","Data":"4630bc30c5834d82b1733b92148a58c1e5c03ee370c2e01abc9b9469422872eb"}
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.624583 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57b9678b9-lvjkg"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.627385 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-gbhl4" podUID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.627769 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest\\\"\"" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98"
Jan 23 06:54:47 crc kubenswrapper[4937]: E0123 06:54:47.629884 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest\\\"\"" pod="openstack/watcher-decision-engine-0" podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b"
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.743669 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57b9678b9-lvjkg"]
Jan 23 06:54:47 crc kubenswrapper[4937]: I0123 06:54:47.772134 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57b9678b9-lvjkg"]
Jan 23 06:54:48 crc kubenswrapper[4937]: I0123 06:54:48.542233 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7830585d-2333-44d0-8d49-284d149ded04" path="/var/lib/kubelet/pods/7830585d-2333-44d0-8d49-284d149ded04/volumes"
Jan 23 06:54:48 crc kubenswrapper[4937]: I0123 06:54:48.635972 4937 generic.go:334] "Generic (PLEG): container finished" podID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerID="cff86a88c4bcd75f25fa5f6a13b3f6b6050cd55c13c03b816eb5fd2330c90166" exitCode=137
Jan 23 06:54:48 crc kubenswrapper[4937]: I0123 06:54:48.636016 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"39f0cf1c-6bdf-4c85-90ed-b49e620682af","Type":"ContainerDied","Data":"cff86a88c4bcd75f25fa5f6a13b3f6b6050cd55c13c03b816eb5fd2330c90166"}
Jan 23 06:54:50 crc kubenswrapper[4937]: I0123 06:54:50.322489 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.156:9322/\": dial tcp 10.217.0.156:9322: connect: connection refused"
Jan 23 06:54:55 crc kubenswrapper[4937]: I0123 06:54:55.321981 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.156:9322/\": dial tcp 10.217.0.156:9322: connect: connection refused"
Jan 23 06:55:00 crc kubenswrapper[4937]: I0123 06:55:00.322802 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.156:9322/\": dial tcp 10.217.0.156:9322: connect: connection refused"
Jan 23 06:55:00 
crc kubenswrapper[4937]: I0123 06:55:00.323747 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.133831 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180: Get \"http://38.102.83.44:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180\": context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.133890 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180: Get \"http://38.102.83.44:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180\": context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.134040 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.44:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6fh657h66ch566h5d7h5bdh647h596h68dhc9h7ch654h5fdh54h687h56ch568h5dch76h674hf9h5b6hf6hbbh5b7h5b9h5b7h6fh564h57fh5c4h57fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf8qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d82e186e-8995-43ff-a65f-a1918f8495bf): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180: Get \"http://38.102.83.44:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180\": context canceled" logger="UnhandledError" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.144905 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.144994 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.145210 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n588h679h665h6bh5f7h5h7bh85h66bhf4h7dh699h58bh565hbch64dh688h8fh56dhf6h54dhfch585h5c8h57fh578h5c4h568hc4h5b8h59dh58cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54t5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7f5ff7ccd9-lvv8r_openstack(5d244d9a-0e45-4115-a96a-fa7e2885c6f4): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:55:01 crc kubenswrapper[4937]: E0123 06:55:01.151399 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-7f5ff7ccd9-lvv8r" podUID="5d244d9a-0e45-4115-a96a-fa7e2885c6f4" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.110941 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.111110 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.111221 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp4rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-2n4zb_openstack(e02394c2-2975-4589-9670-7c69fa89cb1d): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.112633 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-2n4zb" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.117731 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.117795 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.117920 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.44:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vn7nk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-gkrpw_openstack(47488e25-41b9-46e1-8ad7-5cfe2c4654c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.119102 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-gkrpw" 
podUID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" Jan 23 06:55:02 crc kubenswrapper[4937]: E0123 06:55:02.752803 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-gkrpw" podUID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" Jan 23 06:55:03 crc kubenswrapper[4937]: E0123 06:55:03.361887 4937 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 23 06:55:03 crc kubenswrapper[4937]: E0123 06:55:03.361968 4937 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.44:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 23 06:55:03 crc kubenswrapper[4937]: E0123 06:55:03.362104 4937 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.44:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g49k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-zjhvz_openstack(f77dc563-57f2-4c47-a627-98d15343173b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 06:55:03 crc kubenswrapper[4937]: E0123 06:55:03.363537 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-zjhvz" podUID="f77dc563-57f2-4c47-a627-98d15343173b" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.530538 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.544037 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.593487 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54t5h\" (UniqueName: \"kubernetes.io/projected/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-kube-api-access-54t5h\") pod \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.593557 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f0cf1c-6bdf-4c85-90ed-b49e620682af-logs\") pod \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.594421 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f0cf1c-6bdf-4c85-90ed-b49e620682af-logs" (OuterVolumeSpecName: "logs") pod "39f0cf1c-6bdf-4c85-90ed-b49e620682af" (UID: "39f0cf1c-6bdf-4c85-90ed-b49e620682af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.601998 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-kube-api-access-54t5h" (OuterVolumeSpecName: "kube-api-access-54t5h") pod "5d244d9a-0e45-4115-a96a-fa7e2885c6f4" (UID: "5d244d9a-0e45-4115-a96a-fa7e2885c6f4"). InnerVolumeSpecName "kube-api-access-54t5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695694 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-custom-prometheus-ca\") pod \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695751 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-logs\") pod \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695780 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-horizon-secret-key\") pod \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695813 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-scripts\") pod \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695839 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-config-data\") pod \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695861 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwsmr\" (UniqueName: 
\"kubernetes.io/projected/39f0cf1c-6bdf-4c85-90ed-b49e620682af-kube-api-access-vwsmr\") pod \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695933 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-combined-ca-bundle\") pod \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\" (UID: \"39f0cf1c-6bdf-4c85-90ed-b49e620682af\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.695961 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-config-data\") pod \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\" (UID: \"5d244d9a-0e45-4115-a96a-fa7e2885c6f4\") " Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.696405 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f0cf1c-6bdf-4c85-90ed-b49e620682af-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.696432 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54t5h\" (UniqueName: \"kubernetes.io/projected/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-kube-api-access-54t5h\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.696487 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-logs" (OuterVolumeSpecName: "logs") pod "5d244d9a-0e45-4115-a96a-fa7e2885c6f4" (UID: "5d244d9a-0e45-4115-a96a-fa7e2885c6f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.696857 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-scripts" (OuterVolumeSpecName: "scripts") pod "5d244d9a-0e45-4115-a96a-fa7e2885c6f4" (UID: "5d244d9a-0e45-4115-a96a-fa7e2885c6f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.697201 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-config-data" (OuterVolumeSpecName: "config-data") pod "5d244d9a-0e45-4115-a96a-fa7e2885c6f4" (UID: "5d244d9a-0e45-4115-a96a-fa7e2885c6f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.748644 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f0cf1c-6bdf-4c85-90ed-b49e620682af-kube-api-access-vwsmr" (OuterVolumeSpecName: "kube-api-access-vwsmr") pod "39f0cf1c-6bdf-4c85-90ed-b49e620682af" (UID: "39f0cf1c-6bdf-4c85-90ed-b49e620682af"). InnerVolumeSpecName "kube-api-access-vwsmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.751894 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5d244d9a-0e45-4115-a96a-fa7e2885c6f4" (UID: "5d244d9a-0e45-4115-a96a-fa7e2885c6f4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.754242 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "39f0cf1c-6bdf-4c85-90ed-b49e620682af" (UID: "39f0cf1c-6bdf-4c85-90ed-b49e620682af"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.756129 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39f0cf1c-6bdf-4c85-90ed-b49e620682af" (UID: "39f0cf1c-6bdf-4c85-90ed-b49e620682af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.766797 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"39f0cf1c-6bdf-4c85-90ed-b49e620682af","Type":"ContainerDied","Data":"bf9ee16ce5378612f0ac09baeb00452735bce38426ee0fa803fa49ac900c6be3"} Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.766856 4937 scope.go:117] "RemoveContainer" containerID="cff86a88c4bcd75f25fa5f6a13b3f6b6050cd55c13c03b816eb5fd2330c90166" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.766950 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.780730 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f5ff7ccd9-lvv8r" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.786149 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f5ff7ccd9-lvv8r" event={"ID":"5d244d9a-0e45-4115-a96a-fa7e2885c6f4","Type":"ContainerDied","Data":"3ef16ccfc7e0d088f8e46b4d7ef03e0c8f77a7b0963b839ad8f5488596b3b7bc"} Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.794874 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-config-data" (OuterVolumeSpecName: "config-data") pod "39f0cf1c-6bdf-4c85-90ed-b49e620682af" (UID: "39f0cf1c-6bdf-4c85-90ed-b49e620682af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.797091 4937 scope.go:117] "RemoveContainer" containerID="8b5b06273b4f3a4b7a37a9a891cd6fb16ad4e7b3ea224ec5e4573e386ae7ac37" Jan 23 06:55:03 crc kubenswrapper[4937]: E0123 06:55:03.797137 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-zjhvz" podUID="f77dc563-57f2-4c47-a627-98d15343173b" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798412 4937 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798434 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798445 4937 reconciler_common.go:293] 
"Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798481 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798491 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798501 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwsmr\" (UniqueName: \"kubernetes.io/projected/39f0cf1c-6bdf-4c85-90ed-b49e620682af-kube-api-access-vwsmr\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798511 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f0cf1c-6bdf-4c85-90ed-b49e620682af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.798520 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5d244d9a-0e45-4115-a96a-fa7e2885c6f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.881276 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b9fd85d8-54qml"] Jan 23 06:55:03 crc kubenswrapper[4937]: W0123 06:55:03.887831 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded40d957_d6c3_48f4_9255_576482c38569.slice/crio-bd72090f63cfa4c3e04402867dffc9c239355d63d51944ad21dc9ba75196a446 WatchSource:0}: 
Error finding container bd72090f63cfa4c3e04402867dffc9c239355d63d51944ad21dc9ba75196a446: Status 404 returned error can't find the container with id bd72090f63cfa4c3e04402867dffc9c239355d63d51944ad21dc9ba75196a446 Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.892723 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-679cdd6dd9-c2c84"] Jan 23 06:55:03 crc kubenswrapper[4937]: W0123 06:55:03.895973 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2caeb6b9_b501_4ac8_9f28_e3b5a40049cc.slice/crio-b479ab270ca2beef337edfff935dd82084a734c7f2add6d74e6d7f21768f70a9 WatchSource:0}: Error finding container b479ab270ca2beef337edfff935dd82084a734c7f2add6d74e6d7f21768f70a9: Status 404 returned error can't find the container with id b479ab270ca2beef337edfff935dd82084a734c7f2add6d74e6d7f21768f70a9 Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.904782 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.918882 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f5ff7ccd9-lvv8r"] Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.927987 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f5ff7ccd9-lvv8r"] Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.937625 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8676986cc8-dkgvq"] Jan 23 06:55:03 crc kubenswrapper[4937]: I0123 06:55:03.946533 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dwbr9"] Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.029669 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-srnpk"] Jan 23 06:55:04 crc kubenswrapper[4937]: W0123 06:55:04.040831 4937 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeb165f5_08a0_404a_88b3_f89dc3195c28.slice/crio-a9eb20492f27c8293d276d83e82b006155dabdecebd3010976ef804408c78c14 WatchSource:0}: Error finding container a9eb20492f27c8293d276d83e82b006155dabdecebd3010976ef804408c78c14: Status 404 returned error can't find the container with id a9eb20492f27c8293d276d83e82b006155dabdecebd3010976ef804408c78c14 Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.143485 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.172568 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.188766 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:04 crc kubenswrapper[4937]: E0123 06:55:04.189349 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.189366 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" Jan 23 06:55:04 crc kubenswrapper[4937]: E0123 06:55:04.189388 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api-log" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.189395 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api-log" Jan 23 06:55:04 crc kubenswrapper[4937]: E0123 06:55:04.189422 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" containerName="init" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.189430 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" containerName="init" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.189636 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="90051c59-196e-4c8f-8ea6-31c1c4ce8cc0" containerName="init" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.189654 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.189665 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" containerName="watcher-api-log" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.191032 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.196715 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.213904 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.312969 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.313069 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gff66\" (UniqueName: \"kubernetes.io/projected/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-kube-api-access-gff66\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.313183 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-logs\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.313230 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-config-data\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.313278 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.414568 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-logs\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.414992 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-config-data\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.415039 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-logs\") pod \"watcher-api-0\" (UID: 
\"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.415049 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.415430 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.415645 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gff66\" (UniqueName: \"kubernetes.io/projected/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-kube-api-access-gff66\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.422470 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.422558 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.434705 4937 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-config-data\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.441402 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gff66\" (UniqueName: \"kubernetes.io/projected/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-kube-api-access-gff66\") pod \"watcher-api-0\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.521056 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.535015 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f0cf1c-6bdf-4c85-90ed-b49e620682af" path="/var/lib/kubelet/pods/39f0cf1c-6bdf-4c85-90ed-b49e620682af/volumes" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.535695 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d244d9a-0e45-4115-a96a-fa7e2885c6f4" path="/var/lib/kubelet/pods/5d244d9a-0e45-4115-a96a-fa7e2885c6f4/volumes" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.797065 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8676986cc8-dkgvq" event={"ID":"d71c9df1-567e-4dd0-be98-fb63b23ebca7","Type":"ContainerStarted","Data":"a2997b909fc6a32649ea18b015b3ecceb30f95e9d1a41b569e00f65cdb24aeb4"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.799418 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwbr9" event={"ID":"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc","Type":"ContainerStarted","Data":"e06a533c0042274339700dc5a47431b2f7b27a413ba1ccbe65d7aac77de23625"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.799485 4937 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwbr9" event={"ID":"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc","Type":"ContainerStarted","Data":"b479ab270ca2beef337edfff935dd82084a734c7f2add6d74e6d7f21768f70a9"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.805030 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664758949-2wffl" event={"ID":"93868e10-2b6e-4517-a01e-885cdd05fbd8","Type":"ContainerStarted","Data":"147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.806037 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.807289 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-679cdd6dd9-c2c84" event={"ID":"ed40d957-d6c3-48f4-9255-576482c38569","Type":"ContainerStarted","Data":"bd72090f63cfa4c3e04402867dffc9c239355d63d51944ad21dc9ba75196a446"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.809687 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-srnpk" event={"ID":"deb165f5-08a0-404a-88b3-f89dc3195c28","Type":"ContainerStarted","Data":"638c4f31a5c7354cb1c84a5c4b249b40b38566925f097f3356229dd4081fa8bf"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.809750 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-srnpk" event={"ID":"deb165f5-08a0-404a-88b3-f89dc3195c28","Type":"ContainerStarted","Data":"a9eb20492f27c8293d276d83e82b006155dabdecebd3010976ef804408c78c14"} Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.812783 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b9fd85d8-54qml" event={"ID":"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2","Type":"ContainerStarted","Data":"f0eb0bc9824b085bc0f4d536ac6027d200433d7c3a19a85c428983c1f85fa95e"} Jan 23 06:55:04 crc 
kubenswrapper[4937]: I0123 06:55:04.861378 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-srnpk" podStartSLOduration=35.861303002 podStartE2EDuration="35.861303002s" podCreationTimestamp="2026-01-23 06:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:04.855564736 +0000 UTC m=+1304.659331429" watchObservedRunningTime="2026-01-23 06:55:04.861303002 +0000 UTC m=+1304.665069665" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.868133 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dwbr9" podStartSLOduration=46.868109996 podStartE2EDuration="46.868109996s" podCreationTimestamp="2026-01-23 06:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:04.830065795 +0000 UTC m=+1304.633832458" watchObservedRunningTime="2026-01-23 06:55:04.868109996 +0000 UTC m=+1304.671876659" Jan 23 06:55:04 crc kubenswrapper[4937]: I0123 06:55:04.889931 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6664758949-2wffl" podStartSLOduration=49.889900016 podStartE2EDuration="49.889900016s" podCreationTimestamp="2026-01-23 06:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:04.883139963 +0000 UTC m=+1304.686906626" watchObservedRunningTime="2026-01-23 06:55:04.889900016 +0000 UTC m=+1304.693666679" Jan 23 06:55:05 crc kubenswrapper[4937]: I0123 06:55:05.826355 4937 generic.go:334] "Generic (PLEG): container finished" podID="2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" containerID="e06a533c0042274339700dc5a47431b2f7b27a413ba1ccbe65d7aac77de23625" exitCode=0 Jan 23 06:55:05 crc kubenswrapper[4937]: I0123 
06:55:05.826462 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwbr9" event={"ID":"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc","Type":"ContainerDied","Data":"e06a533c0042274339700dc5a47431b2f7b27a413ba1ccbe65d7aac77de23625"} Jan 23 06:55:07 crc kubenswrapper[4937]: I0123 06:55:07.724552 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:55:07 crc kubenswrapper[4937]: I0123 06:55:07.725369 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:55:07 crc kubenswrapper[4937]: I0123 06:55:07.725449 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:55:07 crc kubenswrapper[4937]: I0123 06:55:07.731530 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cafd2f65b886b18799192369211303c6e133b845292356bb63a572ac676cc20"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:55:07 crc kubenswrapper[4937]: I0123 06:55:07.731690 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" 
containerID="cri-o://2cafd2f65b886b18799192369211303c6e133b845292356bb63a572ac676cc20" gracePeriod=600 Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.421188 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dwbr9" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.504754 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.505133 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49b94\" (UniqueName: \"kubernetes.io/projected/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-kube-api-access-49b94\") pod \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.505186 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-operator-scripts\") pod \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\" (UID: \"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc\") " Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.506160 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" (UID: "2caeb6b9-b501-4ac8-9f28-e3b5a40049cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.509397 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-kube-api-access-49b94" (OuterVolumeSpecName: "kube-api-access-49b94") pod "2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" (UID: "2caeb6b9-b501-4ac8-9f28-e3b5a40049cc"). 
InnerVolumeSpecName "kube-api-access-49b94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.608920 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49b94\" (UniqueName: \"kubernetes.io/projected/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-kube-api-access-49b94\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.608980 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.866138 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dwbr9" event={"ID":"2caeb6b9-b501-4ac8-9f28-e3b5a40049cc","Type":"ContainerDied","Data":"b479ab270ca2beef337edfff935dd82084a734c7f2add6d74e6d7f21768f70a9"} Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.866179 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b479ab270ca2beef337edfff935dd82084a734c7f2add6d74e6d7f21768f70a9" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.866255 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dwbr9" Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.868469 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="2cafd2f65b886b18799192369211303c6e133b845292356bb63a572ac676cc20" exitCode=0 Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.868505 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"2cafd2f65b886b18799192369211303c6e133b845292356bb63a572ac676cc20"} Jan 23 06:55:08 crc kubenswrapper[4937]: I0123 06:55:08.868531 4937 scope.go:117] "RemoveContainer" containerID="43224d24fa6475b6c1d236391f3764ff43f9af8ed79931ba5cc6fab271c5f9c7" Jan 23 06:55:08 crc kubenswrapper[4937]: W0123 06:55:08.984686 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91e31fb8_0b81_47c8_b12b_2c638d5b4e7e.slice/crio-c17994327fa80b6e9d90cee3ec361fd66fd87b25d77ffd7b977fa2e62cecfef3 WatchSource:0}: Error finding container c17994327fa80b6e9d90cee3ec361fd66fd87b25d77ffd7b977fa2e62cecfef3: Status 404 returned error can't find the container with id c17994327fa80b6e9d90cee3ec361fd66fd87b25d77ffd7b977fa2e62cecfef3 Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.888016 4937 generic.go:334] "Generic (PLEG): container finished" podID="deb165f5-08a0-404a-88b3-f89dc3195c28" containerID="638c4f31a5c7354cb1c84a5c4b249b40b38566925f097f3356229dd4081fa8bf" exitCode=0 Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.888629 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-srnpk" event={"ID":"deb165f5-08a0-404a-88b3-f89dc3195c28","Type":"ContainerDied","Data":"638c4f31a5c7354cb1c84a5c4b249b40b38566925f097f3356229dd4081fa8bf"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 
06:55:09.897166 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-679cdd6dd9-c2c84" event={"ID":"ed40d957-d6c3-48f4-9255-576482c38569","Type":"ContainerStarted","Data":"27d91b3555818bd81e643566c8a240ed3bcd084c150197ae71cf98d20293c7d6"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.897215 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-679cdd6dd9-c2c84" event={"ID":"ed40d957-d6c3-48f4-9255-576482c38569","Type":"ContainerStarted","Data":"3329a23cd43a4e10e8d5b714f5397f73f747d24c21e798071af624184c0f400c"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.907453 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gbhl4" event={"ID":"71dcc2c8-b02c-4203-9fa8-1af6e12615d4","Type":"ContainerStarted","Data":"10c44657af758b0048b7e631a0b8133f00322859c0a18f99c4a5fa8bcd7c1f3f"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.919425 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"a10facfc39aee45969b033e840bb8f688c490ef6b6d364e9ea5f238aa72a3af0"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.931701 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e","Type":"ContainerStarted","Data":"8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.931756 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e","Type":"ContainerStarted","Data":"5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.931773 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e","Type":"ContainerStarted","Data":"c17994327fa80b6e9d90cee3ec361fd66fd87b25d77ffd7b977fa2e62cecfef3"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.936914 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gbhl4" podStartSLOduration=3.865098408 podStartE2EDuration="54.936892289s" podCreationTimestamp="2026-01-23 06:54:15 +0000 UTC" firstStartedPulling="2026-01-23 06:54:16.757654459 +0000 UTC m=+1256.561421102" lastFinishedPulling="2026-01-23 06:55:07.82944833 +0000 UTC m=+1307.633214983" observedRunningTime="2026-01-23 06:55:09.931065101 +0000 UTC m=+1309.734831764" watchObservedRunningTime="2026-01-23 06:55:09.936892289 +0000 UTC m=+1309.740658952" Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.942222 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"51ed91bb-3033-4e77-8378-6cbdb382dc98","Type":"ContainerStarted","Data":"b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.958237 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b9fd85d8-54qml" event={"ID":"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2","Type":"ContainerStarted","Data":"52c94bc76239dca95d463848bb005433dec274949ab4e452da654d60dd7d453e"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.958346 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b9fd85d8-54qml" event={"ID":"6ac78d23-24ea-411c-ba2e-714e3f3fb5d2","Type":"ContainerStarted","Data":"00e0f57f419bc27cccf5a09f63330597ab02dedd0f6d7531376fcc733305e5c4"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.960348 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8676986cc8-dkgvq" 
event={"ID":"d71c9df1-567e-4dd0-be98-fb63b23ebca7","Type":"ContainerStarted","Data":"f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.962004 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerStarted","Data":"06a4d7e08f84c2ae242fe9e1f6c6c8470b0636295885dbd9e74b594ba5dda612"} Jan 23 06:55:09 crc kubenswrapper[4937]: I0123 06:55:09.963099 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"52ed7619-5046-4b7c-b8a1-458d6831b92b","Type":"ContainerStarted","Data":"d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6"} Jan 23 06:55:10 crc kubenswrapper[4937]: I0123 06:55:10.001358 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.222368391 podStartE2EDuration="56.001341655s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="2026-01-23 06:54:16.485244652 +0000 UTC m=+1256.289011305" lastFinishedPulling="2026-01-23 06:55:08.264217876 +0000 UTC m=+1308.067984569" observedRunningTime="2026-01-23 06:55:09.980912541 +0000 UTC m=+1309.784679194" watchObservedRunningTime="2026-01-23 06:55:10.001341655 +0000 UTC m=+1309.805108308" Jan 23 06:55:10 crc kubenswrapper[4937]: I0123 06:55:10.003553 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=4.167763392 podStartE2EDuration="56.003532184s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="2026-01-23 06:54:16.485020736 +0000 UTC m=+1256.288787389" lastFinishedPulling="2026-01-23 06:55:08.320789518 +0000 UTC m=+1308.124556181" observedRunningTime="2026-01-23 06:55:09.999427132 +0000 UTC m=+1309.803193815" watchObservedRunningTime="2026-01-23 06:55:10.003532184 +0000 UTC 
m=+1309.807298837" Jan 23 06:55:10 crc kubenswrapper[4937]: I0123 06:55:10.108042 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 23 06:55:10 crc kubenswrapper[4937]: I0123 06:55:10.875377 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:55:10 crc kubenswrapper[4937]: I0123 06:55:10.965670 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79db9b68b9-j4qgx"] Jan 23 06:55:10 crc kubenswrapper[4937]: I0123 06:55:10.965910 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="dnsmasq-dns" containerID="cri-o://5c3e4496a879d31751cfdbc31e1cc1167d509dc1097270f4f5cc1e35065a5437" gracePeriod=10 Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.015671 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8676986cc8-dkgvq" event={"ID":"d71c9df1-567e-4dd0-be98-fb63b23ebca7","Type":"ContainerStarted","Data":"621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6"} Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.016327 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-679cdd6dd9-c2c84" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon-log" containerID="cri-o://3329a23cd43a4e10e8d5b714f5397f73f747d24c21e798071af624184c0f400c" gracePeriod=30 Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.016747 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-679cdd6dd9-c2c84" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon" containerID="cri-o://27d91b3555818bd81e643566c8a240ed3bcd084c150197ae71cf98d20293c7d6" gracePeriod=30 Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.092495 4937 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8676986cc8-dkgvq" podStartSLOduration=43.721589286 podStartE2EDuration="48.092473417s" podCreationTimestamp="2026-01-23 06:54:23 +0000 UTC" firstStartedPulling="2026-01-23 06:55:03.895945236 +0000 UTC m=+1303.699711889" lastFinishedPulling="2026-01-23 06:55:08.266829327 +0000 UTC m=+1308.070596020" observedRunningTime="2026-01-23 06:55:11.074782927 +0000 UTC m=+1310.878549580" watchObservedRunningTime="2026-01-23 06:55:11.092473417 +0000 UTC m=+1310.896240070" Jan 23 06:55:11 crc kubenswrapper[4937]: E0123 06:55:11.100353 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73fc9957_a295_493b_8c78_3cb99b56584c.slice/crio-5c3e4496a879d31751cfdbc31e1cc1167d509dc1097270f4f5cc1e35065a5437.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.151148 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-679cdd6dd9-c2c84" podStartSLOduration=48.778317213 podStartE2EDuration="53.151130536s" podCreationTimestamp="2026-01-23 06:54:18 +0000 UTC" firstStartedPulling="2026-01-23 06:55:03.893315635 +0000 UTC m=+1303.697082288" lastFinishedPulling="2026-01-23 06:55:08.266128938 +0000 UTC m=+1308.069895611" observedRunningTime="2026-01-23 06:55:11.148011921 +0000 UTC m=+1310.951778574" watchObservedRunningTime="2026-01-23 06:55:11.151130536 +0000 UTC m=+1310.954897189" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.656187 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-srnpk" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.785513 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-combined-ca-bundle\") pod \"deb165f5-08a0-404a-88b3-f89dc3195c28\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.785643 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-scripts\") pod \"deb165f5-08a0-404a-88b3-f89dc3195c28\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.785672 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-fernet-keys\") pod \"deb165f5-08a0-404a-88b3-f89dc3195c28\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.785703 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-config-data\") pod \"deb165f5-08a0-404a-88b3-f89dc3195c28\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.785793 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2kl\" (UniqueName: \"kubernetes.io/projected/deb165f5-08a0-404a-88b3-f89dc3195c28-kube-api-access-pn2kl\") pod \"deb165f5-08a0-404a-88b3-f89dc3195c28\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.785860 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-credential-keys\") pod \"deb165f5-08a0-404a-88b3-f89dc3195c28\" (UID: \"deb165f5-08a0-404a-88b3-f89dc3195c28\") " Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.793893 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "deb165f5-08a0-404a-88b3-f89dc3195c28" (UID: "deb165f5-08a0-404a-88b3-f89dc3195c28"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.795415 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb165f5-08a0-404a-88b3-f89dc3195c28-kube-api-access-pn2kl" (OuterVolumeSpecName: "kube-api-access-pn2kl") pod "deb165f5-08a0-404a-88b3-f89dc3195c28" (UID: "deb165f5-08a0-404a-88b3-f89dc3195c28"). InnerVolumeSpecName "kube-api-access-pn2kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.798744 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "deb165f5-08a0-404a-88b3-f89dc3195c28" (UID: "deb165f5-08a0-404a-88b3-f89dc3195c28"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.809214 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-scripts" (OuterVolumeSpecName: "scripts") pod "deb165f5-08a0-404a-88b3-f89dc3195c28" (UID: "deb165f5-08a0-404a-88b3-f89dc3195c28"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.816727 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-config-data" (OuterVolumeSpecName: "config-data") pod "deb165f5-08a0-404a-88b3-f89dc3195c28" (UID: "deb165f5-08a0-404a-88b3-f89dc3195c28"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.823418 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deb165f5-08a0-404a-88b3-f89dc3195c28" (UID: "deb165f5-08a0-404a-88b3-f89dc3195c28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.888519 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.888554 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.888563 4937 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.888571 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:11 crc 
kubenswrapper[4937]: I0123 06:55:11.889280 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn2kl\" (UniqueName: \"kubernetes.io/projected/deb165f5-08a0-404a-88b3-f89dc3195c28-kube-api-access-pn2kl\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:11 crc kubenswrapper[4937]: I0123 06:55:11.889383 4937 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/deb165f5-08a0-404a-88b3-f89dc3195c28-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:12 crc kubenswrapper[4937]: I0123 06:55:12.024719 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-srnpk" event={"ID":"deb165f5-08a0-404a-88b3-f89dc3195c28","Type":"ContainerDied","Data":"a9eb20492f27c8293d276d83e82b006155dabdecebd3010976ef804408c78c14"} Jan 23 06:55:12 crc kubenswrapper[4937]: I0123 06:55:12.025561 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9eb20492f27c8293d276d83e82b006155dabdecebd3010976ef804408c78c14" Jan 23 06:55:12 crc kubenswrapper[4937]: I0123 06:55:12.025677 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:55:12 crc kubenswrapper[4937]: I0123 06:55:12.024812 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-srnpk" Jan 23 06:55:12 crc kubenswrapper[4937]: I0123 06:55:12.048794 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57b9fd85d8-54qml" podStartSLOduration=44.608126977 podStartE2EDuration="49.048773287s" podCreationTimestamp="2026-01-23 06:54:23 +0000 UTC" firstStartedPulling="2026-01-23 06:55:03.880028475 +0000 UTC m=+1303.683795128" lastFinishedPulling="2026-01-23 06:55:08.320674785 +0000 UTC m=+1308.124441438" observedRunningTime="2026-01-23 06:55:12.042151308 +0000 UTC m=+1311.845917961" watchObservedRunningTime="2026-01-23 06:55:12.048773287 +0000 UTC m=+1311.852539940" Jan 23 06:55:12 crc kubenswrapper[4937]: I0123 06:55:12.071940 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=8.071921154 podStartE2EDuration="8.071921154s" podCreationTimestamp="2026-01-23 06:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:12.068229194 +0000 UTC m=+1311.871995847" watchObservedRunningTime="2026-01-23 06:55:12.071921154 +0000 UTC m=+1311.875687817" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.120848 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6ffdfcccc5-xjn5f"] Jan 23 06:55:13 crc kubenswrapper[4937]: E0123 06:55:13.121203 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" containerName="mariadb-account-create-update" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.121215 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" containerName="mariadb-account-create-update" Jan 23 06:55:13 crc kubenswrapper[4937]: E0123 06:55:13.121231 4937 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="deb165f5-08a0-404a-88b3-f89dc3195c28" containerName="keystone-bootstrap" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.121237 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb165f5-08a0-404a-88b3-f89dc3195c28" containerName="keystone-bootstrap" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.121421 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb165f5-08a0-404a-88b3-f89dc3195c28" containerName="keystone-bootstrap" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.121447 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" containerName="mariadb-account-create-update" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.122107 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.123604 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mg5cr" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.123833 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.124183 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.125415 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.125758 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.127748 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.134363 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-6ffdfcccc5-xjn5f"] Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313083 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-internal-tls-certs\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313185 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-fernet-keys\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313269 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-credential-keys\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313297 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klq9j\" (UniqueName: \"kubernetes.io/projected/41d5735b-6774-456d-b664-15aafa43fac0-kube-api-access-klq9j\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313387 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-config-data\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: 
\"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313474 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-scripts\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313560 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-combined-ca-bundle\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.313608 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-public-tls-certs\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415571 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-credential-keys\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415664 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klq9j\" (UniqueName: \"kubernetes.io/projected/41d5735b-6774-456d-b664-15aafa43fac0-kube-api-access-klq9j\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: 
\"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415747 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-config-data\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415786 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-scripts\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415853 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-combined-ca-bundle\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415883 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-public-tls-certs\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415958 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-internal-tls-certs\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 
06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.415990 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-fernet-keys\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.421107 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-credential-keys\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.421152 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-public-tls-certs\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.421640 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-config-data\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.421793 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-combined-ca-bundle\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.421820 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-internal-tls-certs\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.423011 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-scripts\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.423703 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41d5735b-6774-456d-b664-15aafa43fac0-fernet-keys\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.446772 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klq9j\" (UniqueName: \"kubernetes.io/projected/41d5735b-6774-456d-b664-15aafa43fac0-kube-api-access-klq9j\") pod \"keystone-6ffdfcccc5-xjn5f\" (UID: \"41d5735b-6774-456d-b664-15aafa43fac0\") " pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:13 crc kubenswrapper[4937]: I0123 06:55:13.740716 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.082414 4937 generic.go:334] "Generic (PLEG): container finished" podID="73fc9957-a295-493b-8c78-3cb99b56584c" containerID="5c3e4496a879d31751cfdbc31e1cc1167d509dc1097270f4f5cc1e35065a5437" exitCode=0 Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.083413 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" event={"ID":"73fc9957-a295-493b-8c78-3cb99b56584c","Type":"ContainerDied","Data":"5c3e4496a879d31751cfdbc31e1cc1167d509dc1097270f4f5cc1e35065a5437"} Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.225776 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.226114 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.300422 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.301474 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.364313 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6ffdfcccc5-xjn5f"] Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.521610 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.521668 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 06:55:14 crc kubenswrapper[4937]: I0123 06:55:14.521804 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:55:14 crc 
kubenswrapper[4937]: I0123 06:55:14.926017 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.081041 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.081084 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.092974 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ffdfcccc5-xjn5f" event={"ID":"41d5735b-6774-456d-b664-15aafa43fac0","Type":"ContainerStarted","Data":"8c4206a720d5d675b0a651eee3710e5400fe897fb09d68a2b017f737a05fb401"} Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.107950 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.111023 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.145881 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: connect: connection refused" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.163056 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 23 06:55:15 crc kubenswrapper[4937]: I0123 06:55:15.563798 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.000834 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.073224 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-config\") pod \"73fc9957-a295-493b-8c78-3cb99b56584c\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.073367 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-nb\") pod \"73fc9957-a295-493b-8c78-3cb99b56584c\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.073430 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-sb\") pod \"73fc9957-a295-493b-8c78-3cb99b56584c\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.073471 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn5p7\" (UniqueName: \"kubernetes.io/projected/73fc9957-a295-493b-8c78-3cb99b56584c-kube-api-access-qn5p7\") pod \"73fc9957-a295-493b-8c78-3cb99b56584c\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.073509 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-svc\") pod \"73fc9957-a295-493b-8c78-3cb99b56584c\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " Jan 23 06:55:16 crc 
kubenswrapper[4937]: I0123 06:55:16.073556 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-swift-storage-0\") pod \"73fc9957-a295-493b-8c78-3cb99b56584c\" (UID: \"73fc9957-a295-493b-8c78-3cb99b56584c\") " Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.095853 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fc9957-a295-493b-8c78-3cb99b56584c-kube-api-access-qn5p7" (OuterVolumeSpecName: "kube-api-access-qn5p7") pod "73fc9957-a295-493b-8c78-3cb99b56584c" (UID: "73fc9957-a295-493b-8c78-3cb99b56584c"). InnerVolumeSpecName "kube-api-access-qn5p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.159138 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6ffdfcccc5-xjn5f" event={"ID":"41d5735b-6774-456d-b664-15aafa43fac0","Type":"ContainerStarted","Data":"5ba94ebda6aa1174f2277a4328bfdcd8a434d6e0f565f3ba522d28069db9f4a6"} Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.160350 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.166480 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73fc9957-a295-493b-8c78-3cb99b56584c" (UID: "73fc9957-a295-493b-8c78-3cb99b56584c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.175817 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" event={"ID":"73fc9957-a295-493b-8c78-3cb99b56584c","Type":"ContainerDied","Data":"07790255d7affadbd5bc58eaa6484e37b5ebc83d4bb0f9fbd08c3fe79fbba08c"} Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.175873 4937 scope.go:117] "RemoveContainer" containerID="5c3e4496a879d31751cfdbc31e1cc1167d509dc1097270f4f5cc1e35065a5437" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.176059 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79db9b68b9-j4qgx" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.176959 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73fc9957-a295-493b-8c78-3cb99b56584c" (UID: "73fc9957-a295-493b-8c78-3cb99b56584c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.192164 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6ffdfcccc5-xjn5f" podStartSLOduration=3.192142176 podStartE2EDuration="3.192142176s" podCreationTimestamp="2026-01-23 06:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:16.182170036 +0000 UTC m=+1315.985936689" watchObservedRunningTime="2026-01-23 06:55:16.192142176 +0000 UTC m=+1315.995908829" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.205296 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn5p7\" (UniqueName: \"kubernetes.io/projected/73fc9957-a295-493b-8c78-3cb99b56584c-kube-api-access-qn5p7\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.205321 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.205331 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.230794 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.233233 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-config" (OuterVolumeSpecName: "config") pod "73fc9957-a295-493b-8c78-3cb99b56584c" (UID: "73fc9957-a295-493b-8c78-3cb99b56584c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.235204 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.241925 4937 scope.go:117] "RemoveContainer" containerID="c3141d1d4847ab115c7dccf6a22b8bd94da943d6d7f66e024bec42daddc13938" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.250297 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73fc9957-a295-493b-8c78-3cb99b56584c" (UID: "73fc9957-a295-493b-8c78-3cb99b56584c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.251111 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73fc9957-a295-493b-8c78-3cb99b56584c" (UID: "73fc9957-a295-493b-8c78-3cb99b56584c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.265544 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.306739 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.306771 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.306780 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73fc9957-a295-493b-8c78-3cb99b56584c-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.316979 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.569463 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79db9b68b9-j4qgx"] Jan 23 06:55:16 crc kubenswrapper[4937]: I0123 06:55:16.569736 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79db9b68b9-j4qgx"] Jan 23 06:55:18 crc kubenswrapper[4937]: I0123 06:55:18.193238 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b" containerName="watcher-decision-engine" containerID="cri-o://d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6" gracePeriod=30 Jan 23 06:55:18 crc kubenswrapper[4937]: I0123 06:55:18.193364 4937 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" containerID="cri-o://b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" gracePeriod=30 Jan 23 06:55:18 crc kubenswrapper[4937]: I0123 06:55:18.521861 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:55:18 crc kubenswrapper[4937]: I0123 06:55:18.539760 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" path="/var/lib/kubelet/pods/73fc9957-a295-493b-8c78-3cb99b56584c/volumes" Jan 23 06:55:20 crc kubenswrapper[4937]: E0123 06:55:20.112289 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:20 crc kubenswrapper[4937]: E0123 06:55:20.114678 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:20 crc kubenswrapper[4937]: E0123 06:55:20.116365 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:20 crc kubenswrapper[4937]: E0123 06:55:20.116418 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:20 crc kubenswrapper[4937]: E0123 06:55:20.173631 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.44:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-2n4zb" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" Jan 23 06:55:24 crc kubenswrapper[4937]: I0123 06:55:24.232465 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8676986cc8-dkgvq" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Jan 23 06:55:24 crc kubenswrapper[4937]: I0123 06:55:24.304402 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57b9fd85d8-54qml" podUID="6ac78d23-24ea-411c-ba2e-714e3f3fb5d2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Jan 23 06:55:24 crc kubenswrapper[4937]: I0123 06:55:24.539485 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 06:55:24 crc kubenswrapper[4937]: I0123 06:55:24.549946 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:55:25 crc kubenswrapper[4937]: E0123 06:55:25.109223 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:25 crc kubenswrapper[4937]: E0123 06:55:25.110766 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:25 crc kubenswrapper[4937]: E0123 06:55:25.112142 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:25 crc kubenswrapper[4937]: E0123 06:55:25.112245 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:26 crc kubenswrapper[4937]: I0123 06:55:26.306894 4937 generic.go:334] "Generic (PLEG): container finished" podID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4" containerID="10c44657af758b0048b7e631a0b8133f00322859c0a18f99c4a5fa8bcd7c1f3f" exitCode=0 Jan 23 06:55:26 crc kubenswrapper[4937]: I0123 06:55:26.307227 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gbhl4" event={"ID":"71dcc2c8-b02c-4203-9fa8-1af6e12615d4","Type":"ContainerDied","Data":"10c44657af758b0048b7e631a0b8133f00322859c0a18f99c4a5fa8bcd7c1f3f"} Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.252393 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.316682 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zjhvz" event={"ID":"f77dc563-57f2-4c47-a627-98d15343173b","Type":"ContainerStarted","Data":"1f2ee790343d87ce9795030e705dbac913df0eea1e8b8d884c2493899ec3767f"} Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.319637 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerStarted","Data":"409893e7fa7d1be8743a91425df85b01f8059c06ffee495a68d6b638e0c2fc3f"} Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.321400 4937 generic.go:334] "Generic (PLEG): container finished" podID="52ed7619-5046-4b7c-b8a1-458d6831b92b" containerID="d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6" exitCode=1 Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.321445 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.321509 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"52ed7619-5046-4b7c-b8a1-458d6831b92b","Type":"ContainerDied","Data":"d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6"} Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.321579 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"52ed7619-5046-4b7c-b8a1-458d6831b92b","Type":"ContainerDied","Data":"05c98d103508cbe7eed792bdd72ca7ca5935c4cea7b1ed942d3e78a4589baaa1"} Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.321616 4937 scope.go:117] "RemoveContainer" containerID="d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.323364 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gkrpw" event={"ID":"47488e25-41b9-46e1-8ad7-5cfe2c4654c7","Type":"ContainerStarted","Data":"1a63be59397fb453f40877ce49a3d597cdad81a46ed53136ff3711fa90499d36"} Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.333616 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-combined-ca-bundle\") pod \"52ed7619-5046-4b7c-b8a1-458d6831b92b\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.333738 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-custom-prometheus-ca\") pod \"52ed7619-5046-4b7c-b8a1-458d6831b92b\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.333842 4937 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-8gfg5\" (UniqueName: \"kubernetes.io/projected/52ed7619-5046-4b7c-b8a1-458d6831b92b-kube-api-access-8gfg5\") pod \"52ed7619-5046-4b7c-b8a1-458d6831b92b\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.333893 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data\") pod \"52ed7619-5046-4b7c-b8a1-458d6831b92b\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.334001 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ed7619-5046-4b7c-b8a1-458d6831b92b-logs\") pod \"52ed7619-5046-4b7c-b8a1-458d6831b92b\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.337865 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ed7619-5046-4b7c-b8a1-458d6831b92b-logs" (OuterVolumeSpecName: "logs") pod "52ed7619-5046-4b7c-b8a1-458d6831b92b" (UID: "52ed7619-5046-4b7c-b8a1-458d6831b92b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.350897 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ed7619-5046-4b7c-b8a1-458d6831b92b-kube-api-access-8gfg5" (OuterVolumeSpecName: "kube-api-access-8gfg5") pod "52ed7619-5046-4b7c-b8a1-458d6831b92b" (UID: "52ed7619-5046-4b7c-b8a1-458d6831b92b"). InnerVolumeSpecName "kube-api-access-8gfg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.363398 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-zjhvz" podStartSLOduration=3.799598709 podStartE2EDuration="1m13.363380459s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="2026-01-23 06:54:16.501422019 +0000 UTC m=+1256.305188672" lastFinishedPulling="2026-01-23 06:55:26.065203769 +0000 UTC m=+1325.868970422" observedRunningTime="2026-01-23 06:55:27.33722325 +0000 UTC m=+1327.140989903" watchObservedRunningTime="2026-01-23 06:55:27.363380459 +0000 UTC m=+1327.167147112" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.364738 4937 scope.go:117] "RemoveContainer" containerID="d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6" Jan 23 06:55:27 crc kubenswrapper[4937]: E0123 06:55:27.365915 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6\": container with ID starting with d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6 not found: ID does not exist" containerID="d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.365945 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6"} err="failed to get container status \"d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6\": rpc error: code = NotFound desc = could not find container \"d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6\": container with ID starting with d6318875d0b02d948a933768fa5d67dd10295c9c7ba34dc8c26ca38530c2f4a6 not found: ID does not exist" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.382599 4937 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52ed7619-5046-4b7c-b8a1-458d6831b92b" (UID: "52ed7619-5046-4b7c-b8a1-458d6831b92b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.402965 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-gkrpw" podStartSLOduration=3.82987425 podStartE2EDuration="1m13.40294477s" podCreationTimestamp="2026-01-23 06:54:14 +0000 UTC" firstStartedPulling="2026-01-23 06:54:16.493808924 +0000 UTC m=+1256.297575577" lastFinishedPulling="2026-01-23 06:55:26.066879444 +0000 UTC m=+1325.870646097" observedRunningTime="2026-01-23 06:55:27.39850734 +0000 UTC m=+1327.202273983" watchObservedRunningTime="2026-01-23 06:55:27.40294477 +0000 UTC m=+1327.206711423" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.431313 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "52ed7619-5046-4b7c-b8a1-458d6831b92b" (UID: "52ed7619-5046-4b7c-b8a1-458d6831b92b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.434772 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data" (OuterVolumeSpecName: "config-data") pod "52ed7619-5046-4b7c-b8a1-458d6831b92b" (UID: "52ed7619-5046-4b7c-b8a1-458d6831b92b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.435680 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data\") pod \"52ed7619-5046-4b7c-b8a1-458d6831b92b\" (UID: \"52ed7619-5046-4b7c-b8a1-458d6831b92b\") " Jan 23 06:55:27 crc kubenswrapper[4937]: W0123 06:55:27.435831 4937 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/52ed7619-5046-4b7c-b8a1-458d6831b92b/volumes/kubernetes.io~secret/config-data Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.435862 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data" (OuterVolumeSpecName: "config-data") pod "52ed7619-5046-4b7c-b8a1-458d6831b92b" (UID: "52ed7619-5046-4b7c-b8a1-458d6831b92b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.436138 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gfg5\" (UniqueName: \"kubernetes.io/projected/52ed7619-5046-4b7c-b8a1-458d6831b92b-kube-api-access-8gfg5\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.436161 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.436173 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52ed7619-5046-4b7c-b8a1-458d6831b92b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.436185 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.436195 4937 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/52ed7619-5046-4b7c-b8a1-458d6831b92b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.684032 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.695176 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.705855 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:55:27 crc kubenswrapper[4937]: E0123 06:55:27.706578 4937 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b" containerName="watcher-decision-engine" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.707280 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b" containerName="watcher-decision-engine" Jan 23 06:55:27 crc kubenswrapper[4937]: E0123 06:55:27.707320 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="init" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.707327 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="init" Jan 23 06:55:27 crc kubenswrapper[4937]: E0123 06:55:27.707351 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="dnsmasq-dns" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.707357 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="dnsmasq-dns" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.707556 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b" containerName="watcher-decision-engine" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.707572 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fc9957-a295-493b-8c78-3cb99b56584c" containerName="dnsmasq-dns" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.708261 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.715528 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.717368 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gbhl4" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.722785 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.746773 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2tck\" (UniqueName: \"kubernetes.io/projected/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-kube-api-access-z2tck\") pod \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747026 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-scripts\") pod \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747080 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-logs\") pod \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747166 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-config-data\") pod \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747211 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-combined-ca-bundle\") pod \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\" (UID: \"71dcc2c8-b02c-4203-9fa8-1af6e12615d4\") " Jan 23 06:55:27 crc 
kubenswrapper[4937]: I0123 06:55:27.747391 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747422 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747473 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438bef39-6283-4b17-b551-74f127660dbd-logs\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747505 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-config-data\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.747539 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljs26\" (UniqueName: \"kubernetes.io/projected/438bef39-6283-4b17-b551-74f127660dbd-kube-api-access-ljs26\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 
06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.750015 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-logs" (OuterVolumeSpecName: "logs") pod "71dcc2c8-b02c-4203-9fa8-1af6e12615d4" (UID: "71dcc2c8-b02c-4203-9fa8-1af6e12615d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.754778 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-scripts" (OuterVolumeSpecName: "scripts") pod "71dcc2c8-b02c-4203-9fa8-1af6e12615d4" (UID: "71dcc2c8-b02c-4203-9fa8-1af6e12615d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.759933 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-kube-api-access-z2tck" (OuterVolumeSpecName: "kube-api-access-z2tck") pod "71dcc2c8-b02c-4203-9fa8-1af6e12615d4" (UID: "71dcc2c8-b02c-4203-9fa8-1af6e12615d4"). InnerVolumeSpecName "kube-api-access-z2tck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.782253 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-config-data" (OuterVolumeSpecName: "config-data") pod "71dcc2c8-b02c-4203-9fa8-1af6e12615d4" (UID: "71dcc2c8-b02c-4203-9fa8-1af6e12615d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.788772 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71dcc2c8-b02c-4203-9fa8-1af6e12615d4" (UID: "71dcc2c8-b02c-4203-9fa8-1af6e12615d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849492 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849559 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849649 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438bef39-6283-4b17-b551-74f127660dbd-logs\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849690 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-config-data\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " 
pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849720 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljs26\" (UniqueName: \"kubernetes.io/projected/438bef39-6283-4b17-b551-74f127660dbd-kube-api-access-ljs26\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849773 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2tck\" (UniqueName: \"kubernetes.io/projected/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-kube-api-access-z2tck\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849785 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849793 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849801 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.849809 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71dcc2c8-b02c-4203-9fa8-1af6e12615d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.850486 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438bef39-6283-4b17-b551-74f127660dbd-logs\") pod 
\"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.852816 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.853370 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.854986 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-config-data\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.870773 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljs26\" (UniqueName: \"kubernetes.io/projected/438bef39-6283-4b17-b551-74f127660dbd-kube-api-access-ljs26\") pod \"watcher-decision-engine-0\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.992544 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.992892 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" 
podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api-log" containerID="cri-o://5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a" gracePeriod=30 Jan 23 06:55:27 crc kubenswrapper[4937]: I0123 06:55:27.992970 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api" containerID="cri-o://8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7" gracePeriod=30 Jan 23 06:55:28 crc kubenswrapper[4937]: I0123 06:55:28.052106 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:28 crc kubenswrapper[4937]: I0123 06:55:28.337203 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gbhl4" event={"ID":"71dcc2c8-b02c-4203-9fa8-1af6e12615d4","Type":"ContainerDied","Data":"095323cd0591b346259b82b6faa1e1c06903578731bb92d66bd8fee3579a278b"} Jan 23 06:55:28 crc kubenswrapper[4937]: I0123 06:55:28.337474 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095323cd0591b346259b82b6faa1e1c06903578731bb92d66bd8fee3579a278b" Jan 23 06:55:28 crc kubenswrapper[4937]: I0123 06:55:28.337555 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gbhl4" Jan 23 06:55:28 crc kubenswrapper[4937]: I0123 06:55:28.540180 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ed7619-5046-4b7c-b8a1-458d6831b92b" path="/var/lib/kubelet/pods/52ed7619-5046-4b7c-b8a1-458d6831b92b/volumes" Jan 23 06:55:28 crc kubenswrapper[4937]: I0123 06:55:28.540694 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.345953 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerStarted","Data":"322c78f8fc797238da34db8f9417ca80cdf76f02385c20b58b0e0036de213d4a"} Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.776825 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74dfd7457b-nnk7x"] Jan 23 06:55:29 crc kubenswrapper[4937]: E0123 06:55:29.777603 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4" containerName="placement-db-sync" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.777620 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4" containerName="placement-db-sync" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.777845 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4" containerName="placement-db-sync" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.778972 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.781822 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.782060 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.782228 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.782358 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-g8n99" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.782379 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.796347 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74dfd7457b-nnk7x"] Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.890837 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nvqs\" (UniqueName: \"kubernetes.io/projected/dcf766f7-3478-4448-a61b-8eff850eae70-kube-api-access-8nvqs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.891013 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-combined-ca-bundle\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.891063 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-scripts\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.891124 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-config-data\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.891171 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-internal-tls-certs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.891208 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-public-tls-certs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.891569 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf766f7-3478-4448-a61b-8eff850eae70-logs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993046 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf766f7-3478-4448-a61b-8eff850eae70-logs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993141 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nvqs\" (UniqueName: \"kubernetes.io/projected/dcf766f7-3478-4448-a61b-8eff850eae70-kube-api-access-8nvqs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993203 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-combined-ca-bundle\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993250 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-scripts\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993308 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-config-data\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993339 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-internal-tls-certs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.993477 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf766f7-3478-4448-a61b-8eff850eae70-logs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:29 crc kubenswrapper[4937]: I0123 06:55:29.994495 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-public-tls-certs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.000905 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-config-data\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.000970 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-public-tls-certs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.001656 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-combined-ca-bundle\") pod 
\"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.002365 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-scripts\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.005244 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf766f7-3478-4448-a61b-8eff850eae70-internal-tls-certs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.013291 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nvqs\" (UniqueName: \"kubernetes.io/projected/dcf766f7-3478-4448-a61b-8eff850eae70-kube-api-access-8nvqs\") pod \"placement-74dfd7457b-nnk7x\" (UID: \"dcf766f7-3478-4448-a61b-8eff850eae70\") " pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.098089 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:30 crc kubenswrapper[4937]: E0123 06:55:30.110717 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:30 crc kubenswrapper[4937]: E0123 06:55:30.112871 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:30 crc kubenswrapper[4937]: E0123 06:55:30.114250 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:30 crc kubenswrapper[4937]: E0123 06:55:30.114313 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.361577 4937 generic.go:334] "Generic (PLEG): container finished" podID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerID="5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a" exitCode=143 Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.361675 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e","Type":"ContainerDied","Data":"5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a"} Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.363961 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerStarted","Data":"0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f"} Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.388053 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.3880350679999998 podStartE2EDuration="3.388035068s" podCreationTimestamp="2026-01-23 06:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:30.383456275 +0000 UTC m=+1330.187222928" watchObservedRunningTime="2026-01-23 06:55:30.388035068 +0000 UTC m=+1330.191801741" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.706682 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74dfd7457b-nnk7x"] Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.742336 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:36196->10.217.0.169:9322: read: connection reset by peer" Jan 23 06:55:30 crc kubenswrapper[4937]: I0123 06:55:30.744218 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:36192->10.217.0.169:9322: read: connection reset by peer" Jan 23 06:55:31 crc 
kubenswrapper[4937]: I0123 06:55:31.261310 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.384868 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74dfd7457b-nnk7x" event={"ID":"dcf766f7-3478-4448-a61b-8eff850eae70","Type":"ContainerStarted","Data":"3426bf05012a12c5f7570d45ab046a4404460ada87539fde645001efa49ccb7e"} Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.384912 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74dfd7457b-nnk7x" event={"ID":"dcf766f7-3478-4448-a61b-8eff850eae70","Type":"ContainerStarted","Data":"77df40d5a121ed1d30cedc1470a41a8a193fcf587d098902b2492720aee40abb"} Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.391537 4937 generic.go:334] "Generic (PLEG): container finished" podID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerID="8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7" exitCode=0 Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.392986 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.393686 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e","Type":"ContainerDied","Data":"8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7"} Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.393723 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e","Type":"ContainerDied","Data":"c17994327fa80b6e9d90cee3ec361fd66fd87b25d77ffd7b977fa2e62cecfef3"} Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.393748 4937 scope.go:117] "RemoveContainer" containerID="8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.425198 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-logs\") pod \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.425268 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gff66\" (UniqueName: \"kubernetes.io/projected/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-kube-api-access-gff66\") pod \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.425370 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-custom-prometheus-ca\") pod \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.425395 4937 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-config-data\") pod \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.425424 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-combined-ca-bundle\") pod \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\" (UID: \"91e31fb8-0b81-47c8-b12b-2c638d5b4e7e\") " Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.428698 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-logs" (OuterVolumeSpecName: "logs") pod "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" (UID: "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.429489 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.431867 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-kube-api-access-gff66" (OuterVolumeSpecName: "kube-api-access-gff66") pod "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" (UID: "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e"). InnerVolumeSpecName "kube-api-access-gff66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.433342 4937 scope.go:117] "RemoveContainer" containerID="5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.467926 4937 scope.go:117] "RemoveContainer" containerID="8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7" Jan 23 06:55:31 crc kubenswrapper[4937]: E0123 06:55:31.468863 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7\": container with ID starting with 8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7 not found: ID does not exist" containerID="8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.468902 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7"} err="failed to get container status \"8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7\": rpc error: code = NotFound desc = could not find container \"8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7\": container with ID starting with 8bd8683844cff8bb5bdd82c88faa4f015dfca3f93ee3c9de03e25d1efaa442e7 not found: ID does not exist" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.468927 4937 scope.go:117] "RemoveContainer" containerID="5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a" Jan 23 06:55:31 crc kubenswrapper[4937]: E0123 06:55:31.471913 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a\": container with ID starting with 
5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a not found: ID does not exist" containerID="5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.471942 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a"} err="failed to get container status \"5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a\": rpc error: code = NotFound desc = could not find container \"5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a\": container with ID starting with 5173248c8e459dc7c8fc25c1551694023b43677947f08bc3b450d1c9bb96351a not found: ID does not exist" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.485948 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-config-data" (OuterVolumeSpecName: "config-data") pod "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" (UID: "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.500640 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" (UID: "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.507744 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" (UID: "91e31fb8-0b81-47c8-b12b-2c638d5b4e7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.531450 4937 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.531494 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.531507 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.531519 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gff66\" (UniqueName: \"kubernetes.io/projected/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e-kube-api-access-gff66\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.730517 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.740430 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.760744 4937 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:31 crc kubenswrapper[4937]: E0123 06:55:31.761111 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api-log" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.761128 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api-log" Jan 23 06:55:31 crc kubenswrapper[4937]: E0123 06:55:31.761156 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.761162 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.761321 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api-log" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.761343 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" containerName="watcher-api" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.762290 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.765643 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.765951 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.767878 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.778697 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.836878 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.837353 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-config-data\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.837439 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8fss\" (UniqueName: \"kubernetes.io/projected/732dc8eb-7c57-435d-83f0-375b1a792dd7-kube-api-access-x8fss\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.837470 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.837553 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.837632 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-public-tls-certs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.837676 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732dc8eb-7c57-435d-83f0-375b1a792dd7-logs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939035 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939097 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-public-tls-certs\") pod 
\"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939130 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732dc8eb-7c57-435d-83f0-375b1a792dd7-logs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939164 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939247 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-config-data\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939269 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8fss\" (UniqueName: \"kubernetes.io/projected/732dc8eb-7c57-435d-83f0-375b1a792dd7-kube-api-access-x8fss\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.939287 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.940532 4937 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732dc8eb-7c57-435d-83f0-375b1a792dd7-logs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.943358 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-public-tls-certs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.943670 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.945494 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-config-data\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.946840 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.950239 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732dc8eb-7c57-435d-83f0-375b1a792dd7-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " 
pod="openstack/watcher-api-0" Jan 23 06:55:31 crc kubenswrapper[4937]: I0123 06:55:31.956321 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8fss\" (UniqueName: \"kubernetes.io/projected/732dc8eb-7c57-435d-83f0-375b1a792dd7-kube-api-access-x8fss\") pod \"watcher-api-0\" (UID: \"732dc8eb-7c57-435d-83f0-375b1a792dd7\") " pod="openstack/watcher-api-0" Jan 23 06:55:32 crc kubenswrapper[4937]: I0123 06:55:32.084401 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 06:55:32 crc kubenswrapper[4937]: I0123 06:55:32.425690 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74dfd7457b-nnk7x" event={"ID":"dcf766f7-3478-4448-a61b-8eff850eae70","Type":"ContainerStarted","Data":"ef61aaa232b27a3ddbac6bb24d8662eb6994cd4cb631af1544872a09a8d715fe"} Jan 23 06:55:32 crc kubenswrapper[4937]: I0123 06:55:32.425832 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:32 crc kubenswrapper[4937]: I0123 06:55:32.462478 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74dfd7457b-nnk7x" podStartSLOduration=3.462455843 podStartE2EDuration="3.462455843s" podCreationTimestamp="2026-01-23 06:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:32.452459031 +0000 UTC m=+1332.256225684" watchObservedRunningTime="2026-01-23 06:55:32.462455843 +0000 UTC m=+1332.266222496" Jan 23 06:55:32 crc kubenswrapper[4937]: I0123 06:55:32.539536 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e31fb8-0b81-47c8-b12b-2c638d5b4e7e" path="/var/lib/kubelet/pods/91e31fb8-0b81-47c8-b12b-2c638d5b4e7e/volumes" Jan 23 06:55:32 crc kubenswrapper[4937]: I0123 06:55:32.738436 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/watcher-api-0"] Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.439964 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"732dc8eb-7c57-435d-83f0-375b1a792dd7","Type":"ContainerStarted","Data":"cfedfb725cdefc97f73e29ae98c8dc9dfab381513fa2659cba392d886bffeec4"} Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.440311 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"732dc8eb-7c57-435d-83f0-375b1a792dd7","Type":"ContainerStarted","Data":"ef5aae4c5970cf861357d6e15184cfe9c51e66e71f401d860dd5cc0eb7456ee5"} Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.440341 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"732dc8eb-7c57-435d-83f0-375b1a792dd7","Type":"ContainerStarted","Data":"78f71b6e429a17107380c564da23138c4e2d8f3b560370fd6ffa6b5483609e87"} Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.440367 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.443203 4937 generic.go:334] "Generic (PLEG): container finished" podID="438bef39-6283-4b17-b551-74f127660dbd" containerID="0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f" exitCode=1 Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.443813 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerDied","Data":"0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f"} Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.446759 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.447391 4937 scope.go:117] "RemoveContainer" 
containerID="0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f" Jan 23 06:55:33 crc kubenswrapper[4937]: I0123 06:55:33.467689 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.467667708 podStartE2EDuration="2.467667708s" podCreationTimestamp="2026-01-23 06:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:33.460788402 +0000 UTC m=+1333.264555065" watchObservedRunningTime="2026-01-23 06:55:33.467667708 +0000 UTC m=+1333.271434361" Jan 23 06:55:35 crc kubenswrapper[4937]: E0123 06:55:35.110207 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:35 crc kubenswrapper[4937]: E0123 06:55:35.111742 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:35 crc kubenswrapper[4937]: E0123 06:55:35.113391 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:35 crc kubenswrapper[4937]: E0123 06:55:35.113430 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:35 crc kubenswrapper[4937]: I0123 06:55:35.772176 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:55:36 crc kubenswrapper[4937]: I0123 06:55:36.125327 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:55:36 crc kubenswrapper[4937]: I0123 06:55:36.217368 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:55:36 crc kubenswrapper[4937]: I0123 06:55:36.478307 4937 generic.go:334] "Generic (PLEG): container finished" podID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" containerID="1a63be59397fb453f40877ce49a3d597cdad81a46ed53136ff3711fa90499d36" exitCode=0 Jan 23 06:55:36 crc kubenswrapper[4937]: I0123 06:55:36.478377 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gkrpw" event={"ID":"47488e25-41b9-46e1-8ad7-5cfe2c4654c7","Type":"ContainerDied","Data":"1a63be59397fb453f40877ce49a3d597cdad81a46ed53136ff3711fa90499d36"} Jan 23 06:55:37 crc kubenswrapper[4937]: I0123 06:55:37.086078 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 06:55:37 crc kubenswrapper[4937]: I0123 06:55:37.972690 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:55:38 crc kubenswrapper[4937]: I0123 06:55:38.052290 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:38 crc kubenswrapper[4937]: I0123 06:55:38.052836 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:38 crc 
kubenswrapper[4937]: I0123 06:55:38.331801 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57b9fd85d8-54qml" Jan 23 06:55:38 crc kubenswrapper[4937]: I0123 06:55:38.389970 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8676986cc8-dkgvq"] Jan 23 06:55:38 crc kubenswrapper[4937]: I0123 06:55:38.493832 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8676986cc8-dkgvq" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon-log" containerID="cri-o://f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390" gracePeriod=30 Jan 23 06:55:38 crc kubenswrapper[4937]: I0123 06:55:38.494472 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8676986cc8-dkgvq" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon" containerID="cri-o://621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6" gracePeriod=30 Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.523426 4937 generic.go:334] "Generic (PLEG): container finished" podID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerID="621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6" exitCode=0 Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.523481 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8676986cc8-dkgvq" event={"ID":"d71c9df1-567e-4dd0-be98-fb63b23ebca7","Type":"ContainerDied","Data":"621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6"} Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.528406 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-gkrpw" event={"ID":"47488e25-41b9-46e1-8ad7-5cfe2c4654c7","Type":"ContainerDied","Data":"d18556cba06a73768d2c0ab3985edd9a47ecceff715f5214e29c0dce4ae7465c"} Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.528442 4937 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="d18556cba06a73768d2c0ab3985edd9a47ecceff715f5214e29c0dce4ae7465c" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.574669 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.648441 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-combined-ca-bundle\") pod \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.648704 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn7nk\" (UniqueName: \"kubernetes.io/projected/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-kube-api-access-vn7nk\") pod \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.648757 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-db-sync-config-data\") pod \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\" (UID: \"47488e25-41b9-46e1-8ad7-5cfe2c4654c7\") " Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.655692 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "47488e25-41b9-46e1-8ad7-5cfe2c4654c7" (UID: "47488e25-41b9-46e1-8ad7-5cfe2c4654c7"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.658510 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-kube-api-access-vn7nk" (OuterVolumeSpecName: "kube-api-access-vn7nk") pod "47488e25-41b9-46e1-8ad7-5cfe2c4654c7" (UID: "47488e25-41b9-46e1-8ad7-5cfe2c4654c7"). InnerVolumeSpecName "kube-api-access-vn7nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.686721 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47488e25-41b9-46e1-8ad7-5cfe2c4654c7" (UID: "47488e25-41b9-46e1-8ad7-5cfe2c4654c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.751326 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.751367 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn7nk\" (UniqueName: \"kubernetes.io/projected/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-kube-api-access-vn7nk\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:39 crc kubenswrapper[4937]: I0123 06:55:39.751381 4937 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47488e25-41b9-46e1-8ad7-5cfe2c4654c7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:40 crc kubenswrapper[4937]: E0123 06:55:40.117257 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:40 crc kubenswrapper[4937]: E0123 06:55:40.124709 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:40 crc kubenswrapper[4937]: E0123 06:55:40.126357 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 23 06:55:40 crc kubenswrapper[4937]: E0123 06:55:40.126412 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:40 crc kubenswrapper[4937]: E0123 06:55:40.484538 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180: Get \\\"http://38.102.83.44:5001/v2/podified-master-centos10/openstack-ceilometer-central/blobs/sha256:77a8c8d5b4c2f289ee77953d53824dcecffc2c2128da9d18233f50bd32061180\\\": context canceled\"" pod="openstack/ceilometer-0" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 
06:55:40.541650 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerStarted","Data":"3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a"} Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.566109 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-gkrpw" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.567219 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="ceilometer-notification-agent" containerID="cri-o://06a4d7e08f84c2ae242fe9e1f6c6c8470b0636295885dbd9e74b594ba5dda612" gracePeriod=30 Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.567365 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerStarted","Data":"bfccc14221b0d9a4f1750c7e8735fe2845ed32f2544207e4650092452433a2ed"} Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.567638 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.569063 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="proxy-httpd" containerID="cri-o://bfccc14221b0d9a4f1750c7e8735fe2845ed32f2544207e4650092452433a2ed" gracePeriod=30 Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.569207 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="sg-core" containerID="cri-o://409893e7fa7d1be8743a91425df85b01f8059c06ffee495a68d6b638e0c2fc3f" gracePeriod=30 Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 
06:55:40.869464 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7fb8d96c46-xq9qn"] Jan 23 06:55:40 crc kubenswrapper[4937]: E0123 06:55:40.870277 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" containerName="barbican-db-sync" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.870297 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" containerName="barbican-db-sync" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.870560 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" containerName="barbican-db-sync" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.872073 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.875184 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-647886d85c-p2mdd"] Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.876745 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.883144 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.883452 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xp7hm" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.883663 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.883837 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.907373 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-647886d85c-p2mdd"] Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.924576 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fb8d96c46-xq9qn"] Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.980903 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df4b78dcc-w6snw"] Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993141 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-config-data\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993269 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwdr6\" (UniqueName: \"kubernetes.io/projected/f7d32607-f131-4998-b179-f60612068c4a-kube-api-access-lwdr6\") pod 
\"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993303 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzx6v\" (UniqueName: \"kubernetes.io/projected/b1b9c727-287f-4b30-98d8-1706ca360e73-kube-api-access-gzx6v\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993380 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-config-data\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993402 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-combined-ca-bundle\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993464 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7d32607-f131-4998-b179-f60612068c4a-logs\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993516 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-config-data-custom\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993562 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b9c727-287f-4b30-98d8-1706ca360e73-logs\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993583 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-combined-ca-bundle\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.993626 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-config-data-custom\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:40 crc kubenswrapper[4937]: I0123 06:55:40.996271 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.025936 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df4b78dcc-w6snw"] Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.093703 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5647cc4676-mx7tx"] Jan 23 06:55:41 crc kubenswrapper[4937]: W0123 06:55:41.093787 4937 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438bef39_6283_4b17_b551_74f127660dbd.slice/crio-0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438bef39_6283_4b17_b551_74f127660dbd.slice/crio-0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f.scope: no such file or directory Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.095185 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.098151 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.100776 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-svc\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.100838 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b9c727-287f-4b30-98d8-1706ca360e73-logs\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.100859 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-combined-ca-bundle\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.100881 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-config-data-custom\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.100907 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-config\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101013 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-config-data\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101028 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-swift-storage-0\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101105 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwdr6\" (UniqueName: \"kubernetes.io/projected/f7d32607-f131-4998-b179-f60612068c4a-kube-api-access-lwdr6\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101127 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzx6v\" (UniqueName: \"kubernetes.io/projected/b1b9c727-287f-4b30-98d8-1706ca360e73-kube-api-access-gzx6v\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 
06:55:41.101165 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-sb\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101193 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-config-data\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101211 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-combined-ca-bundle\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101252 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-nb\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101273 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7d32607-f131-4998-b179-f60612068c4a-logs\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc 
kubenswrapper[4937]: I0123 06:55:41.101308 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bfk\" (UniqueName: \"kubernetes.io/projected/5633b05a-b392-484d-90ed-4ff807c6f780-kube-api-access-86bfk\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.101326 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-config-data-custom\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.106638 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5647cc4676-mx7tx"] Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.108976 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b9c727-287f-4b30-98d8-1706ca360e73-logs\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.113366 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7d32607-f131-4998-b179-f60612068c4a-logs\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.116633 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-config-data-custom\") pod 
\"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.121386 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-config-data\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.126532 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-combined-ca-bundle\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.128419 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-combined-ca-bundle\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.129859 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwdr6\" (UniqueName: \"kubernetes.io/projected/f7d32607-f131-4998-b179-f60612068c4a-kube-api-access-lwdr6\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.132581 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b9c727-287f-4b30-98d8-1706ca360e73-config-data\") pod 
\"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.143955 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzx6v\" (UniqueName: \"kubernetes.io/projected/b1b9c727-287f-4b30-98d8-1706ca360e73-kube-api-access-gzx6v\") pod \"barbican-keystone-listener-7fb8d96c46-xq9qn\" (UID: \"b1b9c727-287f-4b30-98d8-1706ca360e73\") " pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.148214 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f7d32607-f131-4998-b179-f60612068c4a-config-data-custom\") pod \"barbican-worker-647886d85c-p2mdd\" (UID: \"f7d32607-f131-4998-b179-f60612068c4a\") " pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203437 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-nb\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203493 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f321d-6357-47df-a057-31c9036f8ec4-logs\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bfk\" (UniqueName: 
\"kubernetes.io/projected/5633b05a-b392-484d-90ed-4ff807c6f780-kube-api-access-86bfk\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203544 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-svc\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203579 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-config\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203620 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data-custom\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203675 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-swift-storage-0\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203700 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-combined-ca-bundle\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203726 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs6c6\" (UniqueName: \"kubernetes.io/projected/e27f321d-6357-47df-a057-31c9036f8ec4-kube-api-access-cs6c6\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203761 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.203782 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-sb\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.205764 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-config\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.205949 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-sb\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.205977 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-swift-storage-0\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.206029 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-nb\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.206665 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-svc\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.212645 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.224281 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bfk\" (UniqueName: \"kubernetes.io/projected/5633b05a-b392-484d-90ed-4ff807c6f780-kube-api-access-86bfk\") pod \"dnsmasq-dns-df4b78dcc-w6snw\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.224648 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-647886d85c-p2mdd" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.304983 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data-custom\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.305081 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-combined-ca-bundle\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.305118 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs6c6\" (UniqueName: \"kubernetes.io/projected/e27f321d-6357-47df-a057-31c9036f8ec4-kube-api-access-cs6c6\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.305170 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.305233 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f321d-6357-47df-a057-31c9036f8ec4-logs\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.306156 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f321d-6357-47df-a057-31c9036f8ec4-logs\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.310159 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data-custom\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.311383 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-combined-ca-bundle\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.332119 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs6c6\" (UniqueName: 
\"kubernetes.io/projected/e27f321d-6357-47df-a057-31c9036f8ec4-kube-api-access-cs6c6\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.336758 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data\") pod \"barbican-api-5647cc4676-mx7tx\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") " pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.340577 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:41 crc kubenswrapper[4937]: E0123 06:55:41.558757 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf77dc563_57f2_4c47_a627_98d15343173b.slice/crio-conmon-1f2ee790343d87ce9795030e705dbac913df0eea1e8b8d884c2493899ec3767f.scope\": RecentStats: unable to find data in memory cache]" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.601834 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2n4zb" event={"ID":"e02394c2-2975-4589-9670-7c69fa89cb1d","Type":"ContainerStarted","Data":"80443c55ed14927bc9a1cbd967a29c2f5b3f944226efa99da34d17629e5d73ce"} Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.613912 4937 generic.go:334] "Generic (PLEG): container finished" podID="f77dc563-57f2-4c47-a627-98d15343173b" containerID="1f2ee790343d87ce9795030e705dbac913df0eea1e8b8d884c2493899ec3767f" exitCode=0 Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.614010 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zjhvz" 
event={"ID":"f77dc563-57f2-4c47-a627-98d15343173b","Type":"ContainerDied","Data":"1f2ee790343d87ce9795030e705dbac913df0eea1e8b8d884c2493899ec3767f"} Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.620738 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2n4zb" podStartSLOduration=3.290168812 podStartE2EDuration="1m32.620724405s" podCreationTimestamp="2026-01-23 06:54:09 +0000 UTC" firstStartedPulling="2026-01-23 06:54:10.811130093 +0000 UTC m=+1250.614896746" lastFinishedPulling="2026-01-23 06:55:40.141685686 +0000 UTC m=+1339.945452339" observedRunningTime="2026-01-23 06:55:41.615396251 +0000 UTC m=+1341.419162904" watchObservedRunningTime="2026-01-23 06:55:41.620724405 +0000 UTC m=+1341.424491058" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.633465 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645368 4937 generic.go:334] "Generic (PLEG): container finished" podID="ed40d957-d6c3-48f4-9255-576482c38569" containerID="27d91b3555818bd81e643566c8a240ed3bcd084c150197ae71cf98d20293c7d6" exitCode=137 Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645399 4937 generic.go:334] "Generic (PLEG): container finished" podID="ed40d957-d6c3-48f4-9255-576482c38569" containerID="3329a23cd43a4e10e8d5b714f5397f73f747d24c21e798071af624184c0f400c" exitCode=137 Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645447 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-679cdd6dd9-c2c84" event={"ID":"ed40d957-d6c3-48f4-9255-576482c38569","Type":"ContainerDied","Data":"27d91b3555818bd81e643566c8a240ed3bcd084c150197ae71cf98d20293c7d6"} Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645471 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-679cdd6dd9-c2c84" 
event={"ID":"ed40d957-d6c3-48f4-9255-576482c38569","Type":"ContainerDied","Data":"3329a23cd43a4e10e8d5b714f5397f73f747d24c21e798071af624184c0f400c"} Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645483 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-679cdd6dd9-c2c84" event={"ID":"ed40d957-d6c3-48f4-9255-576482c38569","Type":"ContainerDied","Data":"bd72090f63cfa4c3e04402867dffc9c239355d63d51944ad21dc9ba75196a446"} Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645494 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd72090f63cfa4c3e04402867dffc9c239355d63d51944ad21dc9ba75196a446" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.645995 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.683825 4937 generic.go:334] "Generic (PLEG): container finished" podID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerID="409893e7fa7d1be8743a91425df85b01f8059c06ffee495a68d6b638e0c2fc3f" exitCode=2 Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.684001 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerDied","Data":"409893e7fa7d1be8743a91425df85b01f8059c06ffee495a68d6b638e0c2fc3f"} Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.726262 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40d957-d6c3-48f4-9255-576482c38569-logs\") pod \"ed40d957-d6c3-48f4-9255-576482c38569\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.726338 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-config-data\") 
pod \"ed40d957-d6c3-48f4-9255-576482c38569\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.726384 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-scripts\") pod \"ed40d957-d6c3-48f4-9255-576482c38569\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.726490 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdh7j\" (UniqueName: \"kubernetes.io/projected/ed40d957-d6c3-48f4-9255-576482c38569-kube-api-access-gdh7j\") pod \"ed40d957-d6c3-48f4-9255-576482c38569\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.726558 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed40d957-d6c3-48f4-9255-576482c38569-horizon-secret-key\") pod \"ed40d957-d6c3-48f4-9255-576482c38569\" (UID: \"ed40d957-d6c3-48f4-9255-576482c38569\") " Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.729158 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed40d957-d6c3-48f4-9255-576482c38569-logs" (OuterVolumeSpecName: "logs") pod "ed40d957-d6c3-48f4-9255-576482c38569" (UID: "ed40d957-d6c3-48f4-9255-576482c38569"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.732507 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed40d957-d6c3-48f4-9255-576482c38569-kube-api-access-gdh7j" (OuterVolumeSpecName: "kube-api-access-gdh7j") pod "ed40d957-d6c3-48f4-9255-576482c38569" (UID: "ed40d957-d6c3-48f4-9255-576482c38569"). InnerVolumeSpecName "kube-api-access-gdh7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.733825 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed40d957-d6c3-48f4-9255-576482c38569-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ed40d957-d6c3-48f4-9255-576482c38569" (UID: "ed40d957-d6c3-48f4-9255-576482c38569"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.758274 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-config-data" (OuterVolumeSpecName: "config-data") pod "ed40d957-d6c3-48f4-9255-576482c38569" (UID: "ed40d957-d6c3-48f4-9255-576482c38569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.760920 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-scripts" (OuterVolumeSpecName: "scripts") pod "ed40d957-d6c3-48f4-9255-576482c38569" (UID: "ed40d957-d6c3-48f4-9255-576482c38569"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.828877 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdh7j\" (UniqueName: \"kubernetes.io/projected/ed40d957-d6c3-48f4-9255-576482c38569-kube-api-access-gdh7j\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.828908 4937 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed40d957-d6c3-48f4-9255-576482c38569-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.828919 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed40d957-d6c3-48f4-9255-576482c38569-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.828929 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.828939 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed40d957-d6c3-48f4-9255-576482c38569-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.872105 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-647886d85c-p2mdd"] Jan 23 06:55:41 crc kubenswrapper[4937]: I0123 06:55:41.969879 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7fb8d96c46-xq9qn"] Jan 23 06:55:41 crc kubenswrapper[4937]: W0123 06:55:41.985359 4937 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b9c727_287f_4b30_98d8_1706ca360e73.slice/crio-de910e878c8a43b7e1d23cb30476e2bd95416f23f9bfa82d11ecd870140627b1 WatchSource:0}: Error finding container de910e878c8a43b7e1d23cb30476e2bd95416f23f9bfa82d11ecd870140627b1: Status 404 returned error can't find the container with id de910e878c8a43b7e1d23cb30476e2bd95416f23f9bfa82d11ecd870140627b1 Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.085775 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.094153 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.146746 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df4b78dcc-w6snw"] Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.274064 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5647cc4676-mx7tx"] Jan 23 06:55:42 crc kubenswrapper[4937]: W0123 06:55:42.282174 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode27f321d_6357_47df_a057_31c9036f8ec4.slice/crio-aec06672ed69fa64935a46e15b8e7e48f48e8d519701b1d7f4be7aafaab86510 WatchSource:0}: Error finding container aec06672ed69fa64935a46e15b8e7e48f48e8d519701b1d7f4be7aafaab86510: Status 404 returned error can't find the container with id aec06672ed69fa64935a46e15b8e7e48f48e8d519701b1d7f4be7aafaab86510 Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.695687 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-647886d85c-p2mdd" event={"ID":"f7d32607-f131-4998-b179-f60612068c4a","Type":"ContainerStarted","Data":"b574ee4e36ead65788c09586cbb9eeb10af713c13c5d5888d02da0f7e92325f8"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.716826 
4937 generic.go:334] "Generic (PLEG): container finished" podID="f2537508-9450-4931-b4a4-d87cfdaa4a77" containerID="76b5749b3524d08f65f214cc3f03f07a43470a7421af911d91d8855745305eb1" exitCode=0 Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.716917 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xgx9n" event={"ID":"f2537508-9450-4931-b4a4-d87cfdaa4a77","Type":"ContainerDied","Data":"76b5749b3524d08f65f214cc3f03f07a43470a7421af911d91d8855745305eb1"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.749728 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" event={"ID":"b1b9c727-287f-4b30-98d8-1706ca360e73","Type":"ContainerStarted","Data":"de910e878c8a43b7e1d23cb30476e2bd95416f23f9bfa82d11ecd870140627b1"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.823165 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5647cc4676-mx7tx" event={"ID":"e27f321d-6357-47df-a057-31c9036f8ec4","Type":"ContainerStarted","Data":"a68c23b26f0f48cd1c8dbe58b94e630124c9b0b9b4da18edbd4e8aeb36862718"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.823223 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5647cc4676-mx7tx" event={"ID":"e27f321d-6357-47df-a057-31c9036f8ec4","Type":"ContainerStarted","Data":"aec06672ed69fa64935a46e15b8e7e48f48e8d519701b1d7f4be7aafaab86510"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.866858 4937 generic.go:334] "Generic (PLEG): container finished" podID="5633b05a-b392-484d-90ed-4ff807c6f780" containerID="4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85" exitCode=0 Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.868650 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" 
event={"ID":"5633b05a-b392-484d-90ed-4ff807c6f780","Type":"ContainerDied","Data":"4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.868679 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" event={"ID":"5633b05a-b392-484d-90ed-4ff807c6f780","Type":"ContainerStarted","Data":"6ac50232a514d769958eed4ff2adcfe2a72227222eb311bf7fe1bd2f7a54be1d"} Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.868802 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-679cdd6dd9-c2c84" Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.929659 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.967953 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-679cdd6dd9-c2c84"] Jan 23 06:55:42 crc kubenswrapper[4937]: I0123 06:55:42.983504 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-679cdd6dd9-c2c84"] Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.480768 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.555358 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f77dc563-57f2-4c47-a627-98d15343173b-etc-machine-id\") pod \"f77dc563-57f2-4c47-a627-98d15343173b\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.555742 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-config-data\") pod \"f77dc563-57f2-4c47-a627-98d15343173b\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.555856 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-scripts\") pod \"f77dc563-57f2-4c47-a627-98d15343173b\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.555892 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g49k\" (UniqueName: \"kubernetes.io/projected/f77dc563-57f2-4c47-a627-98d15343173b-kube-api-access-7g49k\") pod \"f77dc563-57f2-4c47-a627-98d15343173b\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.555967 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-db-sync-config-data\") pod \"f77dc563-57f2-4c47-a627-98d15343173b\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.555990 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-combined-ca-bundle\") pod \"f77dc563-57f2-4c47-a627-98d15343173b\" (UID: \"f77dc563-57f2-4c47-a627-98d15343173b\") " Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.557293 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77dc563-57f2-4c47-a627-98d15343173b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f77dc563-57f2-4c47-a627-98d15343173b" (UID: "f77dc563-57f2-4c47-a627-98d15343173b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.562787 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-scripts" (OuterVolumeSpecName: "scripts") pod "f77dc563-57f2-4c47-a627-98d15343173b" (UID: "f77dc563-57f2-4c47-a627-98d15343173b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.562964 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77dc563-57f2-4c47-a627-98d15343173b-kube-api-access-7g49k" (OuterVolumeSpecName: "kube-api-access-7g49k") pod "f77dc563-57f2-4c47-a627-98d15343173b" (UID: "f77dc563-57f2-4c47-a627-98d15343173b"). InnerVolumeSpecName "kube-api-access-7g49k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.581863 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f77dc563-57f2-4c47-a627-98d15343173b" (UID: "f77dc563-57f2-4c47-a627-98d15343173b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.602239 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f77dc563-57f2-4c47-a627-98d15343173b" (UID: "f77dc563-57f2-4c47-a627-98d15343173b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.628836 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-config-data" (OuterVolumeSpecName: "config-data") pod "f77dc563-57f2-4c47-a627-98d15343173b" (UID: "f77dc563-57f2-4c47-a627-98d15343173b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.658550 4937 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f77dc563-57f2-4c47-a627-98d15343173b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.658581 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.658604 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.658614 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g49k\" (UniqueName: \"kubernetes.io/projected/f77dc563-57f2-4c47-a627-98d15343173b-kube-api-access-7g49k\") on node \"crc\" DevicePath \"\"" Jan 23 
06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.658624 4937 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.658632 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f77dc563-57f2-4c47-a627-98d15343173b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.885277 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zjhvz" event={"ID":"f77dc563-57f2-4c47-a627-98d15343173b","Type":"ContainerDied","Data":"faa4e6e118e77d5800e6dd9fc249731486dcea0bab345bb088f46dddba5f9fbf"} Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.885331 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faa4e6e118e77d5800e6dd9fc249731486dcea0bab345bb088f46dddba5f9fbf" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.885408 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zjhvz" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.890938 4937 generic.go:334] "Generic (PLEG): container finished" podID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerID="06a4d7e08f84c2ae242fe9e1f6c6c8470b0636295885dbd9e74b594ba5dda612" exitCode=0 Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.891132 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerDied","Data":"06a4d7e08f84c2ae242fe9e1f6c6c8470b0636295885dbd9e74b594ba5dda612"} Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.959580 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:43 crc kubenswrapper[4937]: E0123 06:55:43.960068 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77dc563-57f2-4c47-a627-98d15343173b" containerName="cinder-db-sync" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.960084 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77dc563-57f2-4c47-a627-98d15343173b" containerName="cinder-db-sync" Jan 23 06:55:43 crc kubenswrapper[4937]: E0123 06:55:43.960122 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon-log" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.960130 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon-log" Jan 23 06:55:43 crc kubenswrapper[4937]: E0123 06:55:43.960143 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.960151 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.960372 4937 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.960388 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed40d957-d6c3-48f4-9255-576482c38569" containerName="horizon-log" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.960405 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77dc563-57f2-4c47-a627-98d15343173b" containerName="cinder-db-sync" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.961840 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.970938 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bxfcj" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.971236 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.971392 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.971542 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 06:55:43 crc kubenswrapper[4937]: I0123 06:55:43.980549 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.039310 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df4b78dcc-w6snw"] Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.065212 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.065296 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-scripts\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.065539 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.081799 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7wl\" (UniqueName: \"kubernetes.io/projected/046dfbeb-51d7-4486-8465-9b93d542feee-kube-api-access-dh7wl\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.081904 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/046dfbeb-51d7-4486-8465-9b93d542feee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.081946 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " 
pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.110707 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6655dbbccf-n46t7"] Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.112826 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.125721 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6655dbbccf-n46t7"] Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184622 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184685 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-sb\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184709 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7wl\" (UniqueName: \"kubernetes.io/projected/046dfbeb-51d7-4486-8465-9b93d542feee-kube-api-access-dh7wl\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184733 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-config\") pod 
\"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184749 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-svc\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184775 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/046dfbeb-51d7-4486-8465-9b93d542feee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184798 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184863 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-swift-storage-0\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184884 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184924 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-nb\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184941 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-scripts\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.184981 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x5qr\" (UniqueName: \"kubernetes.io/projected/6181e679-083a-49d3-922d-b97ee0085b8d-kube-api-access-9x5qr\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.187190 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/046dfbeb-51d7-4486-8465-9b93d542feee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.193134 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-scripts\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc 
kubenswrapper[4937]: I0123 06:55:44.201691 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.210008 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.213352 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7wl\" (UniqueName: \"kubernetes.io/projected/046dfbeb-51d7-4486-8465-9b93d542feee-kube-api-access-dh7wl\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.220108 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.230642 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8676986cc8-dkgvq" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.288102 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-swift-storage-0\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.288406 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-nb\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.288631 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x5qr\" (UniqueName: \"kubernetes.io/projected/6181e679-083a-49d3-922d-b97ee0085b8d-kube-api-access-9x5qr\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.288818 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-sb\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.293409 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-config\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.293525 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-svc\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.294708 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-svc\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.295520 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-swift-storage-0\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.299154 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-sb\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.300393 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-config\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.300743 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.301744 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-nb\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.315719 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.317747 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.327000 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.332807 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x5qr\" (UniqueName: \"kubernetes.io/projected/6181e679-083a-49d3-922d-b97ee0085b8d-kube-api-access-9x5qr\") pod \"dnsmasq-dns-6655dbbccf-n46t7\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.350177 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395215 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b566c034-7853-41ca-ae1d-e0105f73eadf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0" Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395311 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-scripts\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395346 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b566c034-7853-41ca-ae1d-e0105f73eadf-logs\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395381 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data-custom\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395477 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grqw2\" (UniqueName: \"kubernetes.io/projected/b566c034-7853-41ca-ae1d-e0105f73eadf-kube-api-access-grqw2\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395519 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.395577 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.414637 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xgx9n"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.472651 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.496881 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-combined-ca-bundle\") pod \"f2537508-9450-4931-b4a4-d87cfdaa4a77\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") "
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.496937 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rwqs\" (UniqueName: \"kubernetes.io/projected/f2537508-9450-4931-b4a4-d87cfdaa4a77-kube-api-access-8rwqs\") pod \"f2537508-9450-4931-b4a4-d87cfdaa4a77\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") "
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.497022 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-config\") pod \"f2537508-9450-4931-b4a4-d87cfdaa4a77\" (UID: \"f2537508-9450-4931-b4a4-d87cfdaa4a77\") "
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.497561 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grqw2\" (UniqueName: \"kubernetes.io/projected/b566c034-7853-41ca-ae1d-e0105f73eadf-kube-api-access-grqw2\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.497616 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.498207 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.498290 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b566c034-7853-41ca-ae1d-e0105f73eadf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.498362 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-scripts\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.498394 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b566c034-7853-41ca-ae1d-e0105f73eadf-logs\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.498425 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data-custom\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.500066 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b566c034-7853-41ca-ae1d-e0105f73eadf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.502759 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.504334 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b566c034-7853-41ca-ae1d-e0105f73eadf-logs\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.505954 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-scripts\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.523310 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data-custom\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.533888 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2537508-9450-4931-b4a4-d87cfdaa4a77-kube-api-access-8rwqs" (OuterVolumeSpecName: "kube-api-access-8rwqs") pod "f2537508-9450-4931-b4a4-d87cfdaa4a77" (UID: "f2537508-9450-4931-b4a4-d87cfdaa4a77"). InnerVolumeSpecName "kube-api-access-8rwqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.534857 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.539666 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grqw2\" (UniqueName: \"kubernetes.io/projected/b566c034-7853-41ca-ae1d-e0105f73eadf-kube-api-access-grqw2\") pod \"cinder-api-0\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.550671 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed40d957-d6c3-48f4-9255-576482c38569" path="/var/lib/kubelet/pods/ed40d957-d6c3-48f4-9255-576482c38569/volumes"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.557288 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-config" (OuterVolumeSpecName: "config") pod "f2537508-9450-4931-b4a4-d87cfdaa4a77" (UID: "f2537508-9450-4931-b4a4-d87cfdaa4a77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.572468 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2537508-9450-4931-b4a4-d87cfdaa4a77" (UID: "f2537508-9450-4931-b4a4-d87cfdaa4a77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.600529 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.600565 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rwqs\" (UniqueName: \"kubernetes.io/projected/f2537508-9450-4931-b4a4-d87cfdaa4a77-kube-api-access-8rwqs\") on node \"crc\" DevicePath \"\""
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.600577 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f2537508-9450-4931-b4a4-d87cfdaa4a77-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.683779 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.940236 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-xgx9n" event={"ID":"f2537508-9450-4931-b4a4-d87cfdaa4a77","Type":"ContainerDied","Data":"3fff7a39b181c8b11687bf4c6fe78cb5d2bce1346137f2a64e2cd836b4ddd499"}
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.940274 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fff7a39b181c8b11687bf4c6fe78cb5d2bce1346137f2a64e2cd836b4ddd499"
Jan 23 06:55:44 crc kubenswrapper[4937]: I0123 06:55:44.940343 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-xgx9n"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.054519 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6655dbbccf-n46t7"]
Jan 23 06:55:45 crc kubenswrapper[4937]: E0123 06:55:45.136406 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 23 06:55:45 crc kubenswrapper[4937]: E0123 06:55:45.156147 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.223079 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9c86dc6f7-zv9mp"]
Jan 23 06:55:45 crc kubenswrapper[4937]: E0123 06:55:45.224135 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2537508-9450-4931-b4a4-d87cfdaa4a77" containerName="neutron-db-sync"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.224160 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2537508-9450-4931-b4a4-d87cfdaa4a77" containerName="neutron-db-sync"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.224771 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2537508-9450-4931-b4a4-d87cfdaa4a77" containerName="neutron-db-sync"
Jan 23 06:55:45 crc kubenswrapper[4937]: E0123 06:55:45.247914 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Jan 23 06:55:45 crc kubenswrapper[4937]: E0123 06:55:45.248302 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.250761 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.352426 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c86dc6f7-zv9mp"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.378649 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6865b49f4-bgj55"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.391633 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-config\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.391676 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2n5\" (UniqueName: \"kubernetes.io/projected/102f97ec-9a78-4b42-8fbf-4ece2671905e-kube-api-access-bc2n5\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.391704 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-swift-storage-0\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.391734 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-svc\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.391787 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-sb\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.391840 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-nb\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.393074 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.400000 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8jg5b"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.400244 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.409232 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.409499 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.426628 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6865b49f4-bgj55"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.485679 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495586 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-combined-ca-bundle\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495646 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-nb\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495724 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-config\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495761 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2n5\" (UniqueName: \"kubernetes.io/projected/102f97ec-9a78-4b42-8fbf-4ece2671905e-kube-api-access-bc2n5\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495779 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-swift-storage-0\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495823 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-svc\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495847 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-config\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495871 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-httpd-config\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495901 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqj9d\" (UniqueName: \"kubernetes.io/projected/226f524e-e956-468f-998d-693d51ec48a0-kube-api-access-jqj9d\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495931 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-sb\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.495955 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-ovndb-tls-certs\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.497118 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-config\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.501453 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-swift-storage-0\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.502655 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-sb\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.507262 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-svc\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.509768 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-nb\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.518174 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6655dbbccf-n46t7"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.529334 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2n5\" (UniqueName: \"kubernetes.io/projected/102f97ec-9a78-4b42-8fbf-4ece2671905e-kube-api-access-bc2n5\") pod \"dnsmasq-dns-9c86dc6f7-zv9mp\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.596990 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-config\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.597058 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-httpd-config\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.597088 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqj9d\" (UniqueName: \"kubernetes.io/projected/226f524e-e956-468f-998d-693d51ec48a0-kube-api-access-jqj9d\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.597139 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-ovndb-tls-certs\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.597194 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-combined-ca-bundle\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.601215 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-combined-ca-bundle\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.602967 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-ovndb-tls-certs\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.610171 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-httpd-config\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.615224 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-config\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.616496 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.626075 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqj9d\" (UniqueName: \"kubernetes.io/projected/226f524e-e956-468f-998d-693d51ec48a0-kube-api-access-jqj9d\") pod \"neutron-6865b49f4-bgj55\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") " pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.860325 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.912604 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.956558 4937 generic.go:334] "Generic (PLEG): container finished" podID="438bef39-6283-4b17-b551-74f127660dbd" containerID="3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a" exitCode=1
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.956636 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerDied","Data":"3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a"}
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.956687 4937 scope.go:117] "RemoveContainer" containerID="0a21ed04a6c7b07083eed48e0a769a5c3eed3ae3e4300abc645e2cf2eb2acc8f"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.957441 4937 scope.go:117] "RemoveContainer" containerID="3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a"
Jan 23 06:55:45 crc kubenswrapper[4937]: E0123 06:55:45.957731 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(438bef39-6283-4b17-b551-74f127660dbd)\"" pod="openstack/watcher-decision-engine-0" podUID="438bef39-6283-4b17-b551-74f127660dbd"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.971177 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-647886d85c-p2mdd" event={"ID":"f7d32607-f131-4998-b179-f60612068c4a","Type":"ContainerStarted","Data":"66cd1cbbb318fd9edacdfa3bb7cc5b8e5375d5f9e2ce782b506b82b3afed260e"}
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.973040 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b566c034-7853-41ca-ae1d-e0105f73eadf","Type":"ContainerStarted","Data":"cc5f62d58b429e223ced01c29f11fc99bf17f966af06d7fa3dd6d930190f36e1"}
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.974381 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" event={"ID":"b1b9c727-287f-4b30-98d8-1706ca360e73","Type":"ContainerStarted","Data":"680289b46e0dde7c7edea9033e0d6b5f6e11e54001b88a22dec66c566e08baa9"}
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.976017 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5647cc4676-mx7tx" event={"ID":"e27f321d-6357-47df-a057-31c9036f8ec4","Type":"ContainerStarted","Data":"85e7cc946a04e16ef5e3a9a6efdeff68a8d6dc0943a2a1fb6008655038676907"}
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.976928 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5647cc4676-mx7tx"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.976955 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5647cc4676-mx7tx"
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.992663 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" event={"ID":"6181e679-083a-49d3-922d-b97ee0085b8d","Type":"ContainerStarted","Data":"9630cf6f21f73c45bdb117906fe049be6175ee94c47603fe38c677ce9bd4c1d8"}
Jan 23 06:55:45 crc kubenswrapper[4937]: I0123 06:55:45.994483 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"046dfbeb-51d7-4486-8465-9b93d542feee","Type":"ContainerStarted","Data":"e8798b06548f1cdb58e32a3d724053308b0d10a907b8cbbc33632c59aa00795d"}
Jan 23 06:55:46 crc kubenswrapper[4937]: I0123 06:55:46.018877 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5647cc4676-mx7tx" podStartSLOduration=5.018857534 podStartE2EDuration="5.018857534s" podCreationTimestamp="2026-01-23 06:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:45.999580862 +0000 UTC m=+1345.803347515" watchObservedRunningTime="2026-01-23 06:55:46.018857534 +0000 UTC m=+1345.822624187"
Jan 23 06:55:46 crc kubenswrapper[4937]: I0123 06:55:46.827133 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9c86dc6f7-zv9mp"]
Jan 23 06:55:46 crc kubenswrapper[4937]: W0123 06:55:46.873759 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod102f97ec_9a78_4b42_8fbf_4ece2671905e.slice/crio-22803e91ef510242cba01e4feb549e807ba6e9e40f529d64333e3b166eab89f7 WatchSource:0}: Error finding container 22803e91ef510242cba01e4feb549e807ba6e9e40f529d64333e3b166eab89f7: Status 404 returned error can't find the container with id 22803e91ef510242cba01e4feb549e807ba6e9e40f529d64333e3b166eab89f7
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.044639 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-647886d85c-p2mdd" event={"ID":"f7d32607-f131-4998-b179-f60612068c4a","Type":"ContainerStarted","Data":"409c1e83994c65e0f6e5ddb5cf2599efe96e5c72b51b96abd88e7ae612350de7"}
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.048187 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" event={"ID":"b1b9c727-287f-4b30-98d8-1706ca360e73","Type":"ContainerStarted","Data":"d2e0a1f6602ca01008ebb16a1207aa2472ffe45313f1526d6ad15d987f11460e"}
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.058515 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" event={"ID":"102f97ec-9a78-4b42-8fbf-4ece2671905e","Type":"ContainerStarted","Data":"22803e91ef510242cba01e4feb549e807ba6e9e40f529d64333e3b166eab89f7"}
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.064524 4937 generic.go:334] "Generic (PLEG): container finished" podID="6181e679-083a-49d3-922d-b97ee0085b8d" containerID="b7ee51834b16764131c60455455ae03307f0d6a47f1037c9661d016ca7b49d09" exitCode=0
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.064585 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" event={"ID":"6181e679-083a-49d3-922d-b97ee0085b8d","Type":"ContainerDied","Data":"b7ee51834b16764131c60455455ae03307f0d6a47f1037c9661d016ca7b49d09"}
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.087061 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" event={"ID":"5633b05a-b392-484d-90ed-4ff807c6f780","Type":"ContainerStarted","Data":"cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4"}
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.087395 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw"
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.087086 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" containerName="dnsmasq-dns" containerID="cri-o://cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4" gracePeriod=10
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.110864 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-647886d85c-p2mdd" podStartSLOduration=4.546342282 podStartE2EDuration="7.11084707s" podCreationTimestamp="2026-01-23 06:55:40 +0000 UTC" firstStartedPulling="2026-01-23 06:55:41.890656195 +0000 UTC m=+1341.694422858" lastFinishedPulling="2026-01-23 06:55:44.455160993 +0000 UTC m=+1344.258927646" observedRunningTime="2026-01-23 06:55:47.085063521 +0000 UTC m=+1346.888830204" watchObservedRunningTime="2026-01-23 06:55:47.11084707 +0000 UTC m=+1346.914613723"
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.129299 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7fb8d96c46-xq9qn" podStartSLOduration=4.663997009 podStartE2EDuration="7.129280609s" podCreationTimestamp="2026-01-23 06:55:40 +0000 UTC" firstStartedPulling="2026-01-23 06:55:41.989279037 +0000 UTC m=+1341.793045700" lastFinishedPulling="2026-01-23 06:55:44.454562637 +0000 UTC m=+1344.258329300" observedRunningTime="2026-01-23 06:55:47.110320425 +0000 UTC m=+1346.914087078" watchObservedRunningTime="2026-01-23 06:55:47.129280609 +0000 UTC m=+1346.933047262"
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.251142 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" podStartSLOduration=7.251109349 podStartE2EDuration="7.251109349s" podCreationTimestamp="2026-01-23 06:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:47.172135079 +0000 UTC m=+1346.975901742" watchObservedRunningTime="2026-01-23 06:55:47.251109349 +0000 UTC m=+1347.054876002"
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.721230 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7"
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.777528 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-config\") pod \"6181e679-083a-49d3-922d-b97ee0085b8d\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") "
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.777866 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-sb\") pod \"6181e679-083a-49d3-922d-b97ee0085b8d\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") "
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.777963 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-svc\") pod \"6181e679-083a-49d3-922d-b97ee0085b8d\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") "
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.778045 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-swift-storage-0\") pod \"6181e679-083a-49d3-922d-b97ee0085b8d\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") "
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.778075 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x5qr\" (UniqueName: \"kubernetes.io/projected/6181e679-083a-49d3-922d-b97ee0085b8d-kube-api-access-9x5qr\") pod \"6181e679-083a-49d3-922d-b97ee0085b8d\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") "
Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.778247 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\"
(UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-nb\") pod \"6181e679-083a-49d3-922d-b97ee0085b8d\" (UID: \"6181e679-083a-49d3-922d-b97ee0085b8d\") " Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.872773 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6865b49f4-bgj55"] Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.876173 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6181e679-083a-49d3-922d-b97ee0085b8d-kube-api-access-9x5qr" (OuterVolumeSpecName: "kube-api-access-9x5qr") pod "6181e679-083a-49d3-922d-b97ee0085b8d" (UID: "6181e679-083a-49d3-922d-b97ee0085b8d"). InnerVolumeSpecName "kube-api-access-9x5qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.899864 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x5qr\" (UniqueName: \"kubernetes.io/projected/6181e679-083a-49d3-922d-b97ee0085b8d-kube-api-access-9x5qr\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.903325 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6181e679-083a-49d3-922d-b97ee0085b8d" (UID: "6181e679-083a-49d3-922d-b97ee0085b8d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.965682 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-config" (OuterVolumeSpecName: "config") pod "6181e679-083a-49d3-922d-b97ee0085b8d" (UID: "6181e679-083a-49d3-922d-b97ee0085b8d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:47 crc kubenswrapper[4937]: I0123 06:55:47.996223 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6181e679-083a-49d3-922d-b97ee0085b8d" (UID: "6181e679-083a-49d3-922d-b97ee0085b8d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.001965 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.002006 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.002020 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.043321 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.050286 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6181e679-083a-49d3-922d-b97ee0085b8d" (UID: "6181e679-083a-49d3-922d-b97ee0085b8d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.056721 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.056772 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.057468 4937 scope.go:117] "RemoveContainer" containerID="3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a" Jan 23 06:55:48 crc kubenswrapper[4937]: E0123 06:55:48.057847 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(438bef39-6283-4b17-b551-74f127660dbd)\"" pod="openstack/watcher-decision-engine-0" podUID="438bef39-6283-4b17-b551-74f127660dbd" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.100353 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6181e679-083a-49d3-922d-b97ee0085b8d" (UID: "6181e679-083a-49d3-922d-b97ee0085b8d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.101387 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.104140 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.104180 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6181e679-083a-49d3-922d-b97ee0085b8d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.128343 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6865b49f4-bgj55" event={"ID":"226f524e-e956-468f-998d-693d51ec48a0","Type":"ContainerStarted","Data":"dbeba780e2e819e5718d3ef45d69fb439880c5628be76d89eda2da36a3c53ca6"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.155910 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"046dfbeb-51d7-4486-8465-9b93d542feee","Type":"ContainerStarted","Data":"ed1f03cd83e60a287f5af0c57f69f0ee3a5a50d64c9f718daa4f6f7b1b70d830"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.179680 4937 generic.go:334] "Generic (PLEG): container finished" podID="5633b05a-b392-484d-90ed-4ff807c6f780" containerID="cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4" exitCode=0 Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.179998 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" event={"ID":"5633b05a-b392-484d-90ed-4ff807c6f780","Type":"ContainerDied","Data":"cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.180024 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" 
event={"ID":"5633b05a-b392-484d-90ed-4ff807c6f780","Type":"ContainerDied","Data":"6ac50232a514d769958eed4ff2adcfe2a72227222eb311bf7fe1bd2f7a54be1d"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.180041 4937 scope.go:117] "RemoveContainer" containerID="cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.180152 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df4b78dcc-w6snw" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.206766 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-svc\") pod \"5633b05a-b392-484d-90ed-4ff807c6f780\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.206812 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-nb\") pod \"5633b05a-b392-484d-90ed-4ff807c6f780\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.206907 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-swift-storage-0\") pod \"5633b05a-b392-484d-90ed-4ff807c6f780\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.206925 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bfk\" (UniqueName: \"kubernetes.io/projected/5633b05a-b392-484d-90ed-4ff807c6f780-kube-api-access-86bfk\") pod \"5633b05a-b392-484d-90ed-4ff807c6f780\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 
06:55:48.206979 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-config\") pod \"5633b05a-b392-484d-90ed-4ff807c6f780\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.207013 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-sb\") pod \"5633b05a-b392-484d-90ed-4ff807c6f780\" (UID: \"5633b05a-b392-484d-90ed-4ff807c6f780\") " Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.245654 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5633b05a-b392-484d-90ed-4ff807c6f780-kube-api-access-86bfk" (OuterVolumeSpecName: "kube-api-access-86bfk") pod "5633b05a-b392-484d-90ed-4ff807c6f780" (UID: "5633b05a-b392-484d-90ed-4ff807c6f780"). InnerVolumeSpecName "kube-api-access-86bfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.263141 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b566c034-7853-41ca-ae1d-e0105f73eadf","Type":"ContainerStarted","Data":"e76edf13fa241ebcdeddf03541eeb969ec8bf71cbe4a4c2d3e60165f58891823"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.308379 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bfk\" (UniqueName: \"kubernetes.io/projected/5633b05a-b392-484d-90ed-4ff807c6f780-kube-api-access-86bfk\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.319878 4937 generic.go:334] "Generic (PLEG): container finished" podID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerID="a6cda470b094e213f2498eb3fca45cd251c1d3fd4296d924c0f9f7dd7e9ff3ea" exitCode=0 Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.319988 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" event={"ID":"102f97ec-9a78-4b42-8fbf-4ece2671905e","Type":"ContainerDied","Data":"a6cda470b094e213f2498eb3fca45cd251c1d3fd4296d924c0f9f7dd7e9ff3ea"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.356099 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" event={"ID":"6181e679-083a-49d3-922d-b97ee0085b8d","Type":"ContainerDied","Data":"9630cf6f21f73c45bdb117906fe049be6175ee94c47603fe38c677ce9bd4c1d8"} Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.356230 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6655dbbccf-n46t7" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.370246 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5633b05a-b392-484d-90ed-4ff807c6f780" (UID: "5633b05a-b392-484d-90ed-4ff807c6f780"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.399760 4937 scope.go:117] "RemoveContainer" containerID="4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.410567 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.422609 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-config" (OuterVolumeSpecName: "config") pod "5633b05a-b392-484d-90ed-4ff807c6f780" (UID: "5633b05a-b392-484d-90ed-4ff807c6f780"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.464172 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5633b05a-b392-484d-90ed-4ff807c6f780" (UID: "5633b05a-b392-484d-90ed-4ff807c6f780"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.479199 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5633b05a-b392-484d-90ed-4ff807c6f780" (UID: "5633b05a-b392-484d-90ed-4ff807c6f780"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.518373 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.518404 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.518413 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.532872 4937 scope.go:117] "RemoveContainer" containerID="cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4" Jan 23 06:55:48 crc kubenswrapper[4937]: E0123 06:55:48.533986 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4\": container with ID starting with cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4 not found: ID does not exist" containerID="cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4" Jan 23 06:55:48 crc kubenswrapper[4937]: 
I0123 06:55:48.534019 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4"} err="failed to get container status \"cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4\": rpc error: code = NotFound desc = could not find container \"cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4\": container with ID starting with cdf6f6a2b9ab30bab70ef0fe16109591cf620ebc7405ff5a90b2d7aff930b3b4 not found: ID does not exist" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.534044 4937 scope.go:117] "RemoveContainer" containerID="4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85" Jan 23 06:55:48 crc kubenswrapper[4937]: E0123 06:55:48.554472 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85\": container with ID starting with 4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85 not found: ID does not exist" containerID="4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.554519 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85"} err="failed to get container status \"4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85\": rpc error: code = NotFound desc = could not find container \"4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85\": container with ID starting with 4905265ec312ddf5ad355d2df925bc535a8924adc18af9afed1a9067d30fbc85 not found: ID does not exist" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.554548 4937 scope.go:117] "RemoveContainer" containerID="b7ee51834b16764131c60455455ae03307f0d6a47f1037c9661d016ca7b49d09" Jan 23 06:55:48 crc 
kubenswrapper[4937]: I0123 06:55:48.618199 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6655dbbccf-n46t7"] Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.618714 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6655dbbccf-n46t7"] Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.642044 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5633b05a-b392-484d-90ed-4ff807c6f780" (UID: "5633b05a-b392-484d-90ed-4ff807c6f780"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.721433 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5633b05a-b392-484d-90ed-4ff807c6f780-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.967456 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df4b78dcc-w6snw"] Jan 23 06:55:48 crc kubenswrapper[4937]: I0123 06:55:48.983372 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df4b78dcc-w6snw"] Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.152497 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.340742 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed91bb-3033-4e77-8378-6cbdb382dc98-logs\") pod \"51ed91bb-3033-4e77-8378-6cbdb382dc98\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.341028 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-config-data\") pod \"51ed91bb-3033-4e77-8378-6cbdb382dc98\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.341127 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-combined-ca-bundle\") pod \"51ed91bb-3033-4e77-8378-6cbdb382dc98\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.341223 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfhb9\" (UniqueName: \"kubernetes.io/projected/51ed91bb-3033-4e77-8378-6cbdb382dc98-kube-api-access-zfhb9\") pod \"51ed91bb-3033-4e77-8378-6cbdb382dc98\" (UID: \"51ed91bb-3033-4e77-8378-6cbdb382dc98\") " Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.342189 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ed91bb-3033-4e77-8378-6cbdb382dc98-logs" (OuterVolumeSpecName: "logs") pod "51ed91bb-3033-4e77-8378-6cbdb382dc98" (UID: "51ed91bb-3033-4e77-8378-6cbdb382dc98"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.350737 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ed91bb-3033-4e77-8378-6cbdb382dc98-kube-api-access-zfhb9" (OuterVolumeSpecName: "kube-api-access-zfhb9") pod "51ed91bb-3033-4e77-8378-6cbdb382dc98" (UID: "51ed91bb-3033-4e77-8378-6cbdb382dc98"). InnerVolumeSpecName "kube-api-access-zfhb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.381752 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ed91bb-3033-4e77-8378-6cbdb382dc98" (UID: "51ed91bb-3033-4e77-8378-6cbdb382dc98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.394818 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b566c034-7853-41ca-ae1d-e0105f73eadf","Type":"ContainerStarted","Data":"0ecaa150fc5fa2d4d6a1e88d5bca293ed188e878f9097cedd2ff9690208f593f"} Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.394979 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api-log" containerID="cri-o://e76edf13fa241ebcdeddf03541eeb969ec8bf71cbe4a4c2d3e60165f58891823" gracePeriod=30 Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.395066 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.395382 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" 
containerName="cinder-api" containerID="cri-o://0ecaa150fc5fa2d4d6a1e88d5bca293ed188e878f9097cedd2ff9690208f593f" gracePeriod=30 Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.399339 4937 generic.go:334] "Generic (PLEG): container finished" podID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" exitCode=137 Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.399469 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.399581 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"51ed91bb-3033-4e77-8378-6cbdb382dc98","Type":"ContainerDied","Data":"b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01"} Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.399658 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"51ed91bb-3033-4e77-8378-6cbdb382dc98","Type":"ContainerDied","Data":"98ca4777aabc721acd069db2120f6c9fbb94953c11bfe70c3b11ea488ff76e31"} Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.399683 4937 scope.go:117] "RemoveContainer" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.425241 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.425225703 podStartE2EDuration="5.425225703s" podCreationTimestamp="2026-01-23 06:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:49.419040605 +0000 UTC m=+1349.222807258" watchObservedRunningTime="2026-01-23 06:55:49.425225703 +0000 UTC m=+1349.228992346" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.426713 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" event={"ID":"102f97ec-9a78-4b42-8fbf-4ece2671905e","Type":"ContainerStarted","Data":"c7f213f367ca15e61f13e7b25bc9b932e54424a196a8a8633af88802ff62f8a8"} Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.426795 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.444903 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ed91bb-3033-4e77-8378-6cbdb382dc98-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.444922 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.444931 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfhb9\" (UniqueName: \"kubernetes.io/projected/51ed91bb-3033-4e77-8378-6cbdb382dc98-kube-api-access-zfhb9\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.447645 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6865b49f4-bgj55" event={"ID":"226f524e-e956-468f-998d-693d51ec48a0","Type":"ContainerStarted","Data":"4cc4b770da1518b11168b354becc611e3693802c8d981e2347509d2b8206c433"} Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.447692 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6865b49f4-bgj55" event={"ID":"226f524e-e956-468f-998d-693d51ec48a0","Type":"ContainerStarted","Data":"499459158d7beb21ac0fee9cb57ca6ca053ee554c6a8dec1c2799f991c968097"} Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.448277 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6865b49f4-bgj55" Jan 23 
06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.472848 4937 scope.go:117] "RemoveContainer" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" Jan 23 06:55:49 crc kubenswrapper[4937]: E0123 06:55:49.473364 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01\": container with ID starting with b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01 not found: ID does not exist" containerID="b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.476484 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01"} err="failed to get container status \"b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01\": rpc error: code = NotFound desc = could not find container \"b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01\": container with ID starting with b2401b9729f9c31c87c695ea0044c816c8376e5c91db5ba91983d3eca13c7c01 not found: ID does not exist" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.477066 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" podStartSLOduration=4.477054906 podStartE2EDuration="4.477054906s" podCreationTimestamp="2026-01-23 06:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:49.4558028 +0000 UTC m=+1349.259569453" watchObservedRunningTime="2026-01-23 06:55:49.477054906 +0000 UTC m=+1349.280821559" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.491977 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-config-data" (OuterVolumeSpecName: "config-data") pod "51ed91bb-3033-4e77-8378-6cbdb382dc98" (UID: "51ed91bb-3033-4e77-8378-6cbdb382dc98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.493134 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6865b49f4-bgj55" podStartSLOduration=4.493108371 podStartE2EDuration="4.493108371s" podCreationTimestamp="2026-01-23 06:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:49.477626961 +0000 UTC m=+1349.281393614" watchObservedRunningTime="2026-01-23 06:55:49.493108371 +0000 UTC m=+1349.296875044" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.546256 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ed91bb-3033-4e77-8378-6cbdb382dc98-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.736272 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.746618 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.757338 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:55:49 crc kubenswrapper[4937]: E0123 06:55:49.757706 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" containerName="init" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.757723 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" containerName="init" Jan 23 06:55:49 crc kubenswrapper[4937]: E0123 
06:55:49.757741 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6181e679-083a-49d3-922d-b97ee0085b8d" containerName="init" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.757748 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6181e679-083a-49d3-922d-b97ee0085b8d" containerName="init" Jan 23 06:55:49 crc kubenswrapper[4937]: E0123 06:55:49.757760 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" containerName="dnsmasq-dns" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.757766 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" containerName="dnsmasq-dns" Jan 23 06:55:49 crc kubenswrapper[4937]: E0123 06:55:49.757787 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.757793 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.757982 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" containerName="watcher-applier" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.758008 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" containerName="dnsmasq-dns" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.758021 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="6181e679-083a-49d3-922d-b97ee0085b8d" containerName="init" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.758584 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.765948 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.796300 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.851366 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392cdd41-36d8-4d25-b7af-2b1b3f42e144-config-data\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.851555 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mch9p\" (UniqueName: \"kubernetes.io/projected/392cdd41-36d8-4d25-b7af-2b1b3f42e144-kube-api-access-mch9p\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.851657 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392cdd41-36d8-4d25-b7af-2b1b3f42e144-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.851684 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392cdd41-36d8-4d25-b7af-2b1b3f42e144-logs\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.953217 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392cdd41-36d8-4d25-b7af-2b1b3f42e144-config-data\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.953383 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mch9p\" (UniqueName: \"kubernetes.io/projected/392cdd41-36d8-4d25-b7af-2b1b3f42e144-kube-api-access-mch9p\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.953458 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392cdd41-36d8-4d25-b7af-2b1b3f42e144-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.953483 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392cdd41-36d8-4d25-b7af-2b1b3f42e144-logs\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.954035 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392cdd41-36d8-4d25-b7af-2b1b3f42e144-logs\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.961981 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392cdd41-36d8-4d25-b7af-2b1b3f42e144-combined-ca-bundle\") pod 
\"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.962354 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392cdd41-36d8-4d25-b7af-2b1b3f42e144-config-data\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:49 crc kubenswrapper[4937]: I0123 06:55:49.975145 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mch9p\" (UniqueName: \"kubernetes.io/projected/392cdd41-36d8-4d25-b7af-2b1b3f42e144-kube-api-access-mch9p\") pod \"watcher-applier-0\" (UID: \"392cdd41-36d8-4d25-b7af-2b1b3f42e144\") " pod="openstack/watcher-applier-0" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.129093 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.457902 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"046dfbeb-51d7-4486-8465-9b93d542feee","Type":"ContainerStarted","Data":"974fa3b405261f55fc06f7d710284fcc6b8344ffefbeda968d8bf94d72a4daff"} Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.472764 4937 generic.go:334] "Generic (PLEG): container finished" podID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerID="e76edf13fa241ebcdeddf03541eeb969ec8bf71cbe4a4c2d3e60165f58891823" exitCode=143 Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.472865 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b566c034-7853-41ca-ae1d-e0105f73eadf","Type":"ContainerDied","Data":"e76edf13fa241ebcdeddf03541eeb969ec8bf71cbe4a4c2d3e60165f58891823"} Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.542046 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="51ed91bb-3033-4e77-8378-6cbdb382dc98" path="/var/lib/kubelet/pods/51ed91bb-3033-4e77-8378-6cbdb382dc98/volumes" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.542553 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5633b05a-b392-484d-90ed-4ff807c6f780" path="/var/lib/kubelet/pods/5633b05a-b392-484d-90ed-4ff807c6f780/volumes" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.543124 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6181e679-083a-49d3-922d-b97ee0085b8d" path="/var/lib/kubelet/pods/6181e679-083a-49d3-922d-b97ee0085b8d/volumes" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.628529 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6ffdfcccc5-xjn5f" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.662752 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.206702888 podStartE2EDuration="7.662731128s" podCreationTimestamp="2026-01-23 06:55:43 +0000 UTC" firstStartedPulling="2026-01-23 06:55:45.735790738 +0000 UTC m=+1345.539557391" lastFinishedPulling="2026-01-23 06:55:46.191818978 +0000 UTC m=+1345.995585631" observedRunningTime="2026-01-23 06:55:50.494495642 +0000 UTC m=+1350.298262295" watchObservedRunningTime="2026-01-23 06:55:50.662731128 +0000 UTC m=+1350.466497771" Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.677584 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 06:55:50 crc kubenswrapper[4937]: I0123 06:55:50.983701 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:51 crc kubenswrapper[4937]: I0123 06:55:51.487180 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" 
event={"ID":"392cdd41-36d8-4d25-b7af-2b1b3f42e144","Type":"ContainerStarted","Data":"b712df534e1262e4c6c8c213835489913d0c17dddd20d4e5b8495d2b16a960d5"} Jan 23 06:55:51 crc kubenswrapper[4937]: I0123 06:55:51.487224 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"392cdd41-36d8-4d25-b7af-2b1b3f42e144","Type":"ContainerStarted","Data":"2b84e2745424b5d03056680ecb1721439b51182fb53349346ddc20c16866c9c0"} Jan 23 06:55:51 crc kubenswrapper[4937]: I0123 06:55:51.529113 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.529092313 podStartE2EDuration="2.529092313s" podCreationTimestamp="2026-01-23 06:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:51.508452375 +0000 UTC m=+1351.312219048" watchObservedRunningTime="2026-01-23 06:55:51.529092313 +0000 UTC m=+1351.332858976" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.328402 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6845797bf7-lmcfd"] Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.348522 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.352110 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.352294 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.371494 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6845797bf7-lmcfd"] Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456627 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-internal-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456670 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-httpd-config\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456711 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkns\" (UniqueName: \"kubernetes.io/projected/828c40cb-f3e5-48ce-ab59-f201b3e46f35-kube-api-access-tzkns\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456767 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-ovndb-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456832 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-public-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456863 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-config\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.456904 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-combined-ca-bundle\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558619 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-public-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558678 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-config\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558727 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-combined-ca-bundle\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558760 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-internal-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558777 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-httpd-config\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558805 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzkns\" (UniqueName: \"kubernetes.io/projected/828c40cb-f3e5-48ce-ab59-f201b3e46f35-kube-api-access-tzkns\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.558843 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-ovndb-tls-certs\") pod 
\"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.581577 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-ovndb-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.584428 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-internal-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.587678 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-public-tls-certs\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.588435 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-combined-ca-bundle\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.593556 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-config\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc 
kubenswrapper[4937]: I0123 06:55:52.594976 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/828c40cb-f3e5-48ce-ab59-f201b3e46f35-httpd-config\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.632499 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzkns\" (UniqueName: \"kubernetes.io/projected/828c40cb-f3e5-48ce-ab59-f201b3e46f35-kube-api-access-tzkns\") pod \"neutron-6845797bf7-lmcfd\" (UID: \"828c40cb-f3e5-48ce-ab59-f201b3e46f35\") " pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.692612 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.803823 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57c6ff4958-kv8rb"] Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.805798 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.815353 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.815705 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.819108 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57c6ff4958-kv8rb"] Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966153 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmnn\" (UniqueName: \"kubernetes.io/projected/f894d76a-8583-49b5-b88d-19b8bf52081d-kube-api-access-5jmnn\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966214 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-public-tls-certs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966250 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-config-data-custom\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966265 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-config-data\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966307 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-internal-tls-certs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966331 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f894d76a-8583-49b5-b88d-19b8bf52081d-logs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:52 crc kubenswrapper[4937]: I0123 06:55:52.966359 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-combined-ca-bundle\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.067861 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-public-tls-certs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.068175 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-config-data-custom\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.068192 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-config-data\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.068232 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-internal-tls-certs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.068256 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f894d76a-8583-49b5-b88d-19b8bf52081d-logs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.068286 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-combined-ca-bundle\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.068356 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmnn\" (UniqueName: 
\"kubernetes.io/projected/f894d76a-8583-49b5-b88d-19b8bf52081d-kube-api-access-5jmnn\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.069456 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f894d76a-8583-49b5-b88d-19b8bf52081d-logs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.080683 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-config-data\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.082622 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-public-tls-certs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.083423 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-internal-tls-certs\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.084141 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-combined-ca-bundle\") pod 
\"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.089280 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f894d76a-8583-49b5-b88d-19b8bf52081d-config-data-custom\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.122637 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmnn\" (UniqueName: \"kubernetes.io/projected/f894d76a-8583-49b5-b88d-19b8bf52081d-kube-api-access-5jmnn\") pod \"barbican-api-57c6ff4958-kv8rb\" (UID: \"f894d76a-8583-49b5-b88d-19b8bf52081d\") " pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.390660 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6845797bf7-lmcfd"] Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.424142 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:53 crc kubenswrapper[4937]: I0123 06:55:53.536304 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845797bf7-lmcfd" event={"ID":"828c40cb-f3e5-48ce-ab59-f201b3e46f35","Type":"ContainerStarted","Data":"f74d2facc88f5eed0374bd6bba10b5c99b0c904a3c6d9e615780c7fe125ac5c4"} Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.001615 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.020663 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57c6ff4958-kv8rb"] Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.227727 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8676986cc8-dkgvq" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.303035 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.555035 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57c6ff4958-kv8rb" event={"ID":"f894d76a-8583-49b5-b88d-19b8bf52081d","Type":"ContainerStarted","Data":"bbdfd597fd46b74b0d35fe58be32219a556384761b817fd96aa4d32ec0d4fafe"} Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.555360 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57c6ff4958-kv8rb" event={"ID":"f894d76a-8583-49b5-b88d-19b8bf52081d","Type":"ContainerStarted","Data":"b31b91642683f8474a6e4a6e7f14dd248edb3e0843eb5f5698a71696667325e0"} Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.557322 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845797bf7-lmcfd" event={"ID":"828c40cb-f3e5-48ce-ab59-f201b3e46f35","Type":"ContainerStarted","Data":"57144db3f7406ee417da95d8612415d37df52bde25bfd5e78ef5c49279ab0682"} Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.557372 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6845797bf7-lmcfd" event={"ID":"828c40cb-f3e5-48ce-ab59-f201b3e46f35","Type":"ContainerStarted","Data":"7e071f5eced2184388a7391d9405061a7e87cc51f21823bfdfbaa82086836640"} Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.558505 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6845797bf7-lmcfd" Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.578075 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.590080 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6845797bf7-lmcfd" podStartSLOduration=2.590063356 podStartE2EDuration="2.590063356s" podCreationTimestamp="2026-01-23 06:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:54.58835038 +0000 UTC m=+1354.392117033" watchObservedRunningTime="2026-01-23 06:55:54.590063356 +0000 UTC m=+1354.393830009" Jan 23 06:55:54 crc kubenswrapper[4937]: I0123 06:55:54.649926 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.129844 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.582260 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57c6ff4958-kv8rb" 
event={"ID":"f894d76a-8583-49b5-b88d-19b8bf52081d","Type":"ContainerStarted","Data":"6f94e398f4ceb9ac209aab96b0f5d19f89f3127fe266ab9c3e60c53895ca0d3a"} Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.582521 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="cinder-scheduler" containerID="cri-o://ed1f03cd83e60a287f5af0c57f69f0ee3a5a50d64c9f718daa4f6f7b1b70d830" gracePeriod=30 Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.582585 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="probe" containerID="cri-o://974fa3b405261f55fc06f7d710284fcc6b8344ffefbeda968d8bf94d72a4daff" gracePeriod=30 Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.609882 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57c6ff4958-kv8rb" podStartSLOduration=3.609863747 podStartE2EDuration="3.609863747s" podCreationTimestamp="2026-01-23 06:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:55:55.60443178 +0000 UTC m=+1355.408198433" watchObservedRunningTime="2026-01-23 06:55:55.609863747 +0000 UTC m=+1355.413630400" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.762087 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.763319 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.765992 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.766132 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.766172 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-q8rst" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.780854 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.835858 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b4d887c1-2f50-42c9-9adf-3f4fe512f399-openstack-config\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.835965 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d887c1-2f50-42c9-9adf-3f4fe512f399-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.836181 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chwm\" (UniqueName: \"kubernetes.io/projected/b4d887c1-2f50-42c9-9adf-3f4fe512f399-kube-api-access-6chwm\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.836245 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b4d887c1-2f50-42c9-9adf-3f4fe512f399-openstack-config-secret\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.863146 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.924931 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664758949-2wffl"] Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.925215 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6664758949-2wffl" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerName="dnsmasq-dns" containerID="cri-o://147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53" gracePeriod=10 Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.937709 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d887c1-2f50-42c9-9adf-3f4fe512f399-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.937827 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chwm\" (UniqueName: \"kubernetes.io/projected/b4d887c1-2f50-42c9-9adf-3f4fe512f399-kube-api-access-6chwm\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.937887 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b4d887c1-2f50-42c9-9adf-3f4fe512f399-openstack-config-secret\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.937944 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b4d887c1-2f50-42c9-9adf-3f4fe512f399-openstack-config\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.938637 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b4d887c1-2f50-42c9-9adf-3f4fe512f399-openstack-config\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.944953 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b4d887c1-2f50-42c9-9adf-3f4fe512f399-openstack-config-secret\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.945945 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4d887c1-2f50-42c9-9adf-3f4fe512f399-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " pod="openstack/openstackclient" Jan 23 06:55:55 crc kubenswrapper[4937]: I0123 06:55:55.975637 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chwm\" (UniqueName: \"kubernetes.io/projected/b4d887c1-2f50-42c9-9adf-3f4fe512f399-kube-api-access-6chwm\") pod \"openstackclient\" (UID: \"b4d887c1-2f50-42c9-9adf-3f4fe512f399\") " 
pod="openstack/openstackclient" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.081477 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.507265 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.596311 4937 generic.go:334] "Generic (PLEG): container finished" podID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerID="147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53" exitCode=0 Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.596940 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6664758949-2wffl" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.597349 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664758949-2wffl" event={"ID":"93868e10-2b6e-4517-a01e-885cdd05fbd8","Type":"ContainerDied","Data":"147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53"} Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.597383 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6664758949-2wffl" event={"ID":"93868e10-2b6e-4517-a01e-885cdd05fbd8","Type":"ContainerDied","Data":"426eef3558dd1260443e6c747f7f8857e41409844046d95e7eb2f6b07c21b5f7"} Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.597400 4937 scope.go:117] "RemoveContainer" containerID="147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.598035 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.598065 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57c6ff4958-kv8rb" 
Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.632994 4937 scope.go:117] "RemoveContainer" containerID="46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.674673 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-nb\") pod \"93868e10-2b6e-4517-a01e-885cdd05fbd8\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.674724 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q62hw\" (UniqueName: \"kubernetes.io/projected/93868e10-2b6e-4517-a01e-885cdd05fbd8-kube-api-access-q62hw\") pod \"93868e10-2b6e-4517-a01e-885cdd05fbd8\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.674785 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-svc\") pod \"93868e10-2b6e-4517-a01e-885cdd05fbd8\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.674882 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-sb\") pod \"93868e10-2b6e-4517-a01e-885cdd05fbd8\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.674975 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-config\") pod \"93868e10-2b6e-4517-a01e-885cdd05fbd8\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.675000 
4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-swift-storage-0\") pod \"93868e10-2b6e-4517-a01e-885cdd05fbd8\" (UID: \"93868e10-2b6e-4517-a01e-885cdd05fbd8\") " Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.695418 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93868e10-2b6e-4517-a01e-885cdd05fbd8-kube-api-access-q62hw" (OuterVolumeSpecName: "kube-api-access-q62hw") pod "93868e10-2b6e-4517-a01e-885cdd05fbd8" (UID: "93868e10-2b6e-4517-a01e-885cdd05fbd8"). InnerVolumeSpecName "kube-api-access-q62hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.721610 4937 scope.go:117] "RemoveContainer" containerID="147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53" Jan 23 06:55:56 crc kubenswrapper[4937]: E0123 06:55:56.737782 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53\": container with ID starting with 147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53 not found: ID does not exist" containerID="147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.738037 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53"} err="failed to get container status \"147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53\": rpc error: code = NotFound desc = could not find container \"147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53\": container with ID starting with 147dfd28451f727d50c1eb185c2a96cede824aae3c9988137195377e6d805f53 not found: ID does not 
exist" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.738157 4937 scope.go:117] "RemoveContainer" containerID="46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737" Jan 23 06:55:56 crc kubenswrapper[4937]: E0123 06:55:56.740554 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737\": container with ID starting with 46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737 not found: ID does not exist" containerID="46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.740690 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737"} err="failed to get container status \"46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737\": rpc error: code = NotFound desc = could not find container \"46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737\": container with ID starting with 46e579f078c9fa6c6f6a68b3dabbda2f33340e1352766bc05dc842834de1c737 not found: ID does not exist" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.778819 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93868e10-2b6e-4517-a01e-885cdd05fbd8" (UID: "93868e10-2b6e-4517-a01e-885cdd05fbd8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.790727 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.790933 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q62hw\" (UniqueName: \"kubernetes.io/projected/93868e10-2b6e-4517-a01e-885cdd05fbd8-kube-api-access-q62hw\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.811770 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93868e10-2b6e-4517-a01e-885cdd05fbd8" (UID: "93868e10-2b6e-4517-a01e-885cdd05fbd8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.812104 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-config" (OuterVolumeSpecName: "config") pod "93868e10-2b6e-4517-a01e-885cdd05fbd8" (UID: "93868e10-2b6e-4517-a01e-885cdd05fbd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.828282 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93868e10-2b6e-4517-a01e-885cdd05fbd8" (UID: "93868e10-2b6e-4517-a01e-885cdd05fbd8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.832423 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93868e10-2b6e-4517-a01e-885cdd05fbd8" (UID: "93868e10-2b6e-4517-a01e-885cdd05fbd8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.853892 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.892695 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.892731 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.892743 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.892752 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93868e10-2b6e-4517-a01e-885cdd05fbd8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.939326 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6664758949-2wffl"] Jan 23 06:55:56 crc kubenswrapper[4937]: I0123 06:55:56.950382 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-6664758949-2wffl"] Jan 23 06:55:57 crc kubenswrapper[4937]: I0123 06:55:57.648761 4937 generic.go:334] "Generic (PLEG): container finished" podID="046dfbeb-51d7-4486-8465-9b93d542feee" containerID="974fa3b405261f55fc06f7d710284fcc6b8344ffefbeda968d8bf94d72a4daff" exitCode=0 Jan 23 06:55:57 crc kubenswrapper[4937]: I0123 06:55:57.648992 4937 generic.go:334] "Generic (PLEG): container finished" podID="046dfbeb-51d7-4486-8465-9b93d542feee" containerID="ed1f03cd83e60a287f5af0c57f69f0ee3a5a50d64c9f718daa4f6f7b1b70d830" exitCode=0 Jan 23 06:55:57 crc kubenswrapper[4937]: I0123 06:55:57.649035 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"046dfbeb-51d7-4486-8465-9b93d542feee","Type":"ContainerDied","Data":"974fa3b405261f55fc06f7d710284fcc6b8344ffefbeda968d8bf94d72a4daff"} Jan 23 06:55:57 crc kubenswrapper[4937]: I0123 06:55:57.649061 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"046dfbeb-51d7-4486-8465-9b93d542feee","Type":"ContainerDied","Data":"ed1f03cd83e60a287f5af0c57f69f0ee3a5a50d64c9f718daa4f6f7b1b70d830"} Jan 23 06:55:57 crc kubenswrapper[4937]: I0123 06:55:57.652058 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b4d887c1-2f50-42c9-9adf-3f4fe512f399","Type":"ContainerStarted","Data":"e87ba137e2c80e2ad4009db962d240256e24c2b92848811cee44bbdacce852db"} Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.054018 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.054953 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.056052 4937 scope.go:117] "RemoveContainer" 
containerID="3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.107258 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.221286 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data-custom\") pod \"046dfbeb-51d7-4486-8465-9b93d542feee\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.221369 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data\") pod \"046dfbeb-51d7-4486-8465-9b93d542feee\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.221406 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/046dfbeb-51d7-4486-8465-9b93d542feee-etc-machine-id\") pod \"046dfbeb-51d7-4486-8465-9b93d542feee\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.221441 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-scripts\") pod \"046dfbeb-51d7-4486-8465-9b93d542feee\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.221613 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh7wl\" (UniqueName: \"kubernetes.io/projected/046dfbeb-51d7-4486-8465-9b93d542feee-kube-api-access-dh7wl\") pod \"046dfbeb-51d7-4486-8465-9b93d542feee\" (UID: 
\"046dfbeb-51d7-4486-8465-9b93d542feee\") " Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.221712 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-combined-ca-bundle\") pod \"046dfbeb-51d7-4486-8465-9b93d542feee\" (UID: \"046dfbeb-51d7-4486-8465-9b93d542feee\") " Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.222005 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/046dfbeb-51d7-4486-8465-9b93d542feee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "046dfbeb-51d7-4486-8465-9b93d542feee" (UID: "046dfbeb-51d7-4486-8465-9b93d542feee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.226425 4937 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/046dfbeb-51d7-4486-8465-9b93d542feee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.248867 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046dfbeb-51d7-4486-8465-9b93d542feee-kube-api-access-dh7wl" (OuterVolumeSpecName: "kube-api-access-dh7wl") pod "046dfbeb-51d7-4486-8465-9b93d542feee" (UID: "046dfbeb-51d7-4486-8465-9b93d542feee"). InnerVolumeSpecName "kube-api-access-dh7wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.255162 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "046dfbeb-51d7-4486-8465-9b93d542feee" (UID: "046dfbeb-51d7-4486-8465-9b93d542feee"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.264301 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-scripts" (OuterVolumeSpecName: "scripts") pod "046dfbeb-51d7-4486-8465-9b93d542feee" (UID: "046dfbeb-51d7-4486-8465-9b93d542feee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.328666 4937 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.328710 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.328723 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh7wl\" (UniqueName: \"kubernetes.io/projected/046dfbeb-51d7-4486-8465-9b93d542feee-kube-api-access-dh7wl\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.359623 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "046dfbeb-51d7-4486-8465-9b93d542feee" (UID: "046dfbeb-51d7-4486-8465-9b93d542feee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.414875 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data" (OuterVolumeSpecName: "config-data") pod "046dfbeb-51d7-4486-8465-9b93d542feee" (UID: "046dfbeb-51d7-4486-8465-9b93d542feee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.430619 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.430660 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/046dfbeb-51d7-4486-8465-9b93d542feee-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.540511 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" path="/var/lib/kubelet/pods/93868e10-2b6e-4517-a01e-885cdd05fbd8/volumes" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.549901 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.669248 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerStarted","Data":"000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf"} Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.672473 4937 generic.go:334] "Generic (PLEG): container finished" podID="e02394c2-2975-4589-9670-7c69fa89cb1d" containerID="80443c55ed14927bc9a1cbd967a29c2f5b3f944226efa99da34d17629e5d73ce" exitCode=0 Jan 
23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.672543 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2n4zb" event={"ID":"e02394c2-2975-4589-9670-7c69fa89cb1d","Type":"ContainerDied","Data":"80443c55ed14927bc9a1cbd967a29c2f5b3f944226efa99da34d17629e5d73ce"} Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.675651 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"046dfbeb-51d7-4486-8465-9b93d542feee","Type":"ContainerDied","Data":"e8798b06548f1cdb58e32a3d724053308b0d10a907b8cbbc33632c59aa00795d"} Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.675694 4937 scope.go:117] "RemoveContainer" containerID="974fa3b405261f55fc06f7d710284fcc6b8344ffefbeda968d8bf94d72a4daff" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.676110 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.726089 4937 scope.go:117] "RemoveContainer" containerID="ed1f03cd83e60a287f5af0c57f69f0ee3a5a50d64c9f718daa4f6f7b1b70d830" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.749800 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.773039 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.784815 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:58 crc kubenswrapper[4937]: E0123 06:55:58.785356 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="cinder-scheduler" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785381 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="cinder-scheduler" 
Jan 23 06:55:58 crc kubenswrapper[4937]: E0123 06:55:58.785407 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerName="dnsmasq-dns" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785417 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerName="dnsmasq-dns" Jan 23 06:55:58 crc kubenswrapper[4937]: E0123 06:55:58.785441 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="probe" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785452 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="probe" Jan 23 06:55:58 crc kubenswrapper[4937]: E0123 06:55:58.785465 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerName="init" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785472 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerName="init" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785699 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="cinder-scheduler" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785715 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" containerName="probe" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.785748 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="93868e10-2b6e-4517-a01e-885cdd05fbd8" containerName="dnsmasq-dns" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.787232 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.789989 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.799387 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.939704 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.939775 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5zw\" (UniqueName: \"kubernetes.io/projected/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-kube-api-access-5r5zw\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.939810 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.939841 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 
06:55:58.939894 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:58 crc kubenswrapper[4937]: I0123 06:55:58.939926 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.011849 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77f756c697-sr9wm"] Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.019985 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.027530 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.027693 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.027546 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.041628 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 
06:55:59.041686 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r5zw\" (UniqueName: \"kubernetes.io/projected/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-kube-api-access-5r5zw\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.041713 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.041746 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.041790 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.041799 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.041817 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.047183 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.047845 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.048548 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.052176 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.080408 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r5zw\" (UniqueName: \"kubernetes.io/projected/469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7-kube-api-access-5r5zw\") pod \"cinder-scheduler-0\" (UID: \"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7\") " pod="openstack/cinder-scheduler-0" Jan 23 
06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.099311 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f756c697-sr9wm"] Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.129160 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.153780 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f843f7b9-df1a-4df3-b8e7-bf007d785f62-etc-swift\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.153823 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f843f7b9-df1a-4df3-b8e7-bf007d785f62-run-httpd\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.153889 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f843f7b9-df1a-4df3-b8e7-bf007d785f62-log-httpd\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.153914 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-internal-tls-certs\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 
06:55:59.153968 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-combined-ca-bundle\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.153990 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-public-tls-certs\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.154011 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krw4m\" (UniqueName: \"kubernetes.io/projected/f843f7b9-df1a-4df3-b8e7-bf007d785f62-kube-api-access-krw4m\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.154043 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-config-data\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.256831 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f843f7b9-df1a-4df3-b8e7-bf007d785f62-etc-swift\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc 
kubenswrapper[4937]: I0123 06:55:59.257149 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f843f7b9-df1a-4df3-b8e7-bf007d785f62-run-httpd\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.257245 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f843f7b9-df1a-4df3-b8e7-bf007d785f62-log-httpd\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.257310 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-internal-tls-certs\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.257418 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-combined-ca-bundle\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.257863 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-public-tls-certs\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.257894 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krw4m\" (UniqueName: \"kubernetes.io/projected/f843f7b9-df1a-4df3-b8e7-bf007d785f62-kube-api-access-krw4m\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.257952 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-config-data\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.259164 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f843f7b9-df1a-4df3-b8e7-bf007d785f62-run-httpd\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.259212 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f843f7b9-df1a-4df3-b8e7-bf007d785f62-log-httpd\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.262754 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-public-tls-certs\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.262960 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-config-data\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.264880 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-combined-ca-bundle\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.270673 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f843f7b9-df1a-4df3-b8e7-bf007d785f62-internal-tls-certs\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.276308 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f843f7b9-df1a-4df3-b8e7-bf007d785f62-etc-swift\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.279284 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krw4m\" (UniqueName: \"kubernetes.io/projected/f843f7b9-df1a-4df3-b8e7-bf007d785f62-kube-api-access-krw4m\") pod \"swift-proxy-77f756c697-sr9wm\" (UID: \"f843f7b9-df1a-4df3-b8e7-bf007d785f62\") " pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.478884 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:55:59 crc kubenswrapper[4937]: I0123 06:55:59.750862 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 06:55:59 crc kubenswrapper[4937]: W0123 06:55:59.761148 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod469f0c1e_e8bd_41ef_9d99_a4c83cc4beb7.slice/crio-1ead0361d9bc834e5b0237555389769d3691382c7fb96871d0e6ae4467454618 WatchSource:0}: Error finding container 1ead0361d9bc834e5b0237555389769d3691382c7fb96871d0e6ae4467454618: Status 404 returned error can't find the container with id 1ead0361d9bc834e5b0237555389769d3691382c7fb96871d0e6ae4467454618 Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.103177 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77f756c697-sr9wm"] Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.131716 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.241513 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.445896 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2n4zb" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.555700 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046dfbeb-51d7-4486-8465-9b93d542feee" path="/var/lib/kubelet/pods/046dfbeb-51d7-4486-8465-9b93d542feee/volumes" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.627064 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-config-data\") pod \"e02394c2-2975-4589-9670-7c69fa89cb1d\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.627176 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-combined-ca-bundle\") pod \"e02394c2-2975-4589-9670-7c69fa89cb1d\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.627228 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-db-sync-config-data\") pod \"e02394c2-2975-4589-9670-7c69fa89cb1d\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.627285 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp4rn\" (UniqueName: \"kubernetes.io/projected/e02394c2-2975-4589-9670-7c69fa89cb1d-kube-api-access-jp4rn\") pod \"e02394c2-2975-4589-9670-7c69fa89cb1d\" (UID: \"e02394c2-2975-4589-9670-7c69fa89cb1d\") " Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.636890 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e02394c2-2975-4589-9670-7c69fa89cb1d-kube-api-access-jp4rn" (OuterVolumeSpecName: 
"kube-api-access-jp4rn") pod "e02394c2-2975-4589-9670-7c69fa89cb1d" (UID: "e02394c2-2975-4589-9670-7c69fa89cb1d"). InnerVolumeSpecName "kube-api-access-jp4rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.641505 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e02394c2-2975-4589-9670-7c69fa89cb1d" (UID: "e02394c2-2975-4589-9670-7c69fa89cb1d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.681922 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e02394c2-2975-4589-9670-7c69fa89cb1d" (UID: "e02394c2-2975-4589-9670-7c69fa89cb1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.712708 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-config-data" (OuterVolumeSpecName: "config-data") pod "e02394c2-2975-4589-9670-7c69fa89cb1d" (UID: "e02394c2-2975-4589-9670-7c69fa89cb1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.714504 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f756c697-sr9wm" event={"ID":"f843f7b9-df1a-4df3-b8e7-bf007d785f62","Type":"ContainerStarted","Data":"84949bf5677586b699573faec66a4ced0f4d0905f8476113f7841801957adc9a"} Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.714546 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f756c697-sr9wm" event={"ID":"f843f7b9-df1a-4df3-b8e7-bf007d785f62","Type":"ContainerStarted","Data":"2f43a905a88de0b55ff60f5e03abdefe1cf6b945a62412965e2e28749a420f9c"} Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.717402 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2n4zb" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.717399 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2n4zb" event={"ID":"e02394c2-2975-4589-9670-7c69fa89cb1d","Type":"ContainerDied","Data":"b9ac35dc3e6f33d835645aa3239cb119ec35d6af0b009accf4caa0095993a6fc"} Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.718009 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ac35dc3e6f33d835645aa3239cb119ec35d6af0b009accf4caa0095993a6fc" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.729312 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7","Type":"ContainerStarted","Data":"1ead0361d9bc834e5b0237555389769d3691382c7fb96871d0e6ae4467454618"} Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.730429 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 
06:56:00.730452 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.730463 4937 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e02394c2-2975-4589-9670-7c69fa89cb1d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.730472 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp4rn\" (UniqueName: \"kubernetes.io/projected/e02394c2-2975-4589-9670-7c69fa89cb1d-kube-api-access-jp4rn\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:00 crc kubenswrapper[4937]: I0123 06:56:00.794917 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.534637 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65f85c5bcf-rjmhn"] Jan 23 06:56:01 crc kubenswrapper[4937]: E0123 06:56:01.535307 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" containerName="glance-db-sync" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.535321 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" containerName="glance-db-sync" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.535504 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" containerName="glance-db-sync" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.536697 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.570147 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f85c5bcf-rjmhn"] Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.661026 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-sb\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.661166 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-swift-storage-0\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.661193 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-svc\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.661224 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgbxn\" (UniqueName: \"kubernetes.io/projected/a12faab4-277a-44f5-becc-bd8f273ae701-kube-api-access-dgbxn\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.661250 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-config\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.661275 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-nb\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.759869 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7","Type":"ContainerStarted","Data":"fb71d62e0e357b530fa9116c561adf520128057986b862f5d1b7a9ea6beddc69"} Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.766100 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-swift-storage-0\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.766146 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-svc\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.766185 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgbxn\" (UniqueName: 
\"kubernetes.io/projected/a12faab4-277a-44f5-becc-bd8f273ae701-kube-api-access-dgbxn\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.766213 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-config\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.766242 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-nb\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.766309 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-sb\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.767275 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-sb\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.767993 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-nb\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.768128 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-swift-storage-0\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.768358 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77f756c697-sr9wm" event={"ID":"f843f7b9-df1a-4df3-b8e7-bf007d785f62","Type":"ContainerStarted","Data":"b9d7f2f92668a03164e41fbe6f70e7e599ab838e3a4116b3de1b7079b6f356fc"} Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.768471 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.768490 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77f756c697-sr9wm" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.768813 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-config\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.769380 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-svc\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " 
pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.800482 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgbxn\" (UniqueName: \"kubernetes.io/projected/a12faab4-277a-44f5-becc-bd8f273ae701-kube-api-access-dgbxn\") pod \"dnsmasq-dns-65f85c5bcf-rjmhn\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.801329 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77f756c697-sr9wm" podStartSLOduration=3.801318734 podStartE2EDuration="3.801318734s" podCreationTimestamp="2026-01-23 06:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:01.799001511 +0000 UTC m=+1361.602768164" watchObservedRunningTime="2026-01-23 06:56:01.801318734 +0000 UTC m=+1361.605085387" Jan 23 06:56:01 crc kubenswrapper[4937]: I0123 06:56:01.885148 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.306969 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.308774 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.312022 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.312204 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.312347 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ngf4q" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.395632 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.484820 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.485088 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.485161 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.485273 
4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f6g2\" (UniqueName: \"kubernetes.io/projected/e41ceb1c-4607-42b4-ba84-07bd05554a7b-kube-api-access-9f6g2\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.485304 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.485358 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.485378 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-logs\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.518268 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f85c5bcf-rjmhn"] Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.586857 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.587210 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.587322 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f6g2\" (UniqueName: \"kubernetes.io/projected/e41ceb1c-4607-42b4-ba84-07bd05554a7b-kube-api-access-9f6g2\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.587351 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.587424 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.587454 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-logs\") pod \"glance-default-external-api-0\" (UID: 
\"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.587504 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.588819 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.588909 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-logs\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.589102 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.602675 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 
06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.605410 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.612352 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.620800 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f6g2\" (UniqueName: \"kubernetes.io/projected/e41ceb1c-4607-42b4-ba84-07bd05554a7b-kube-api-access-9f6g2\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.645894 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.700941 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.744583 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.746496 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.750356 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.769077 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.817365 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" event={"ID":"a12faab4-277a-44f5-becc-bd8f273ae701","Type":"ContainerStarted","Data":"30735a0c892151023eeb3d2a9e453376fff23246bbeec52ae6f79144391064bb"} Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.899816 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-logs\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.900131 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.900180 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9v42\" (UniqueName: \"kubernetes.io/projected/db28c0b0-f910-42ce-a178-9d98da20f41c-kube-api-access-v9v42\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 
06:56:02.900236 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.900269 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.900316 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:02 crc kubenswrapper[4937]: I0123 06:56:02.900368 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.005745 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 
06:56:03.005818 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.005866 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.005967 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-logs\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.006023 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.006060 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9v42\" (UniqueName: \"kubernetes.io/projected/db28c0b0-f910-42ce-a178-9d98da20f41c-kube-api-access-v9v42\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.006107 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.006529 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.006751 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-logs\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.014173 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.017772 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.039529 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.040098 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.045693 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9v42\" (UniqueName: \"kubernetes.io/projected/db28c0b0-f910-42ce-a178-9d98da20f41c-kube-api-access-v9v42\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.152253 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.217009 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.338062 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74dfd7457b-nnk7x" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.449282 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.487942 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.882685 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7","Type":"ContainerStarted","Data":"f534b4d1bd05c28b5fb0cf97340530076cc0816254b5efbe08b1531c8e395d99"} Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.898875 4937 generic.go:334] "Generic (PLEG): container finished" podID="a12faab4-277a-44f5-becc-bd8f273ae701" containerID="90d98424e81038b18cee86ea7e18a34d0dcd560e7c31e54dc2fffa1fba399776" exitCode=0 Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.899284 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" event={"ID":"a12faab4-277a-44f5-becc-bd8f273ae701","Type":"ContainerDied","Data":"90d98424e81038b18cee86ea7e18a34d0dcd560e7c31e54dc2fffa1fba399776"} Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.915222 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e41ceb1c-4607-42b4-ba84-07bd05554a7b","Type":"ContainerStarted","Data":"a9e5d9aef4021a9d367d80b9efc9f2bfff8e79e417a6ca30f28db30639d3541c"} Jan 23 06:56:03 crc kubenswrapper[4937]: I0123 06:56:03.916300 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.916278587 podStartE2EDuration="5.916278587s" podCreationTimestamp="2026-01-23 06:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:03.900929361 +0000 UTC m=+1363.704696034" watchObservedRunningTime="2026-01-23 06:56:03.916278587 +0000 UTC m=+1363.720045240" Jan 23 
06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.135083 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.230784 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8676986cc8-dkgvq" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.166:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.166:8443: connect: connection refused" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.230938 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8676986cc8-dkgvq" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.271087 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:04 crc kubenswrapper[4937]: W0123 06:56:04.343537 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb28c0b0_f910_42ce_a178_9d98da20f41c.slice/crio-d805a18e3e8564641550d1861534bcc7ac2ba2da326c937f5b3548b29a68f3db WatchSource:0}: Error finding container d805a18e3e8564641550d1861534bcc7ac2ba2da326c937f5b3548b29a68f3db: Status 404 returned error can't find the container with id d805a18e3e8564641550d1861534bcc7ac2ba2da326c937f5b3548b29a68f3db Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.426880 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jh76z"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.428522 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.446892 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jh76z"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.629932 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ngg\" (UniqueName: \"kubernetes.io/projected/e83400c9-5cb0-40c8-907b-92e840794f92-kube-api-access-b9ngg\") pod \"nova-api-db-create-jh76z\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.630040 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83400c9-5cb0-40c8-907b-92e840794f92-operator-scripts\") pod \"nova-api-db-create-jh76z\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.630161 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-634e-account-create-update-px24p"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.631421 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.634447 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.642910 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-634e-account-create-update-px24p"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.732616 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83400c9-5cb0-40c8-907b-92e840794f92-operator-scripts\") pod \"nova-api-db-create-jh76z\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.733862 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64vb\" (UniqueName: \"kubernetes.io/projected/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-kube-api-access-r64vb\") pod \"nova-api-634e-account-create-update-px24p\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") " pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.733934 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-operator-scripts\") pod \"nova-api-634e-account-create-update-px24p\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") " pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.733980 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9ngg\" (UniqueName: \"kubernetes.io/projected/e83400c9-5cb0-40c8-907b-92e840794f92-kube-api-access-b9ngg\") pod \"nova-api-db-create-jh76z\" 
(UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.733319 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83400c9-5cb0-40c8-907b-92e840794f92-operator-scripts\") pod \"nova-api-db-create-jh76z\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.737655 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vg7tb"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.738876 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.745325 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vg7tb"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.789933 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9ngg\" (UniqueName: \"kubernetes.io/projected/e83400c9-5cb0-40c8-907b-92e840794f92-kube-api-access-b9ngg\") pod \"nova-api-db-create-jh76z\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.817751 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.839739 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64vb\" (UniqueName: \"kubernetes.io/projected/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-kube-api-access-r64vb\") pod \"nova-api-634e-account-create-update-px24p\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") " pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.839823 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-operator-scripts\") pod \"nova-api-634e-account-create-update-px24p\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") " pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.839896 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhmf\" (UniqueName: \"kubernetes.io/projected/93b93d08-1c46-4589-bb41-3a6353b03d7f-kube-api-access-7nhmf\") pod \"nova-cell0-db-create-vg7tb\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.839914 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b93d08-1c46-4589-bb41-3a6353b03d7f-operator-scripts\") pod \"nova-cell0-db-create-vg7tb\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.840856 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-operator-scripts\") pod 
\"nova-api-634e-account-create-update-px24p\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") " pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.842807 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-827z7"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.850067 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.870694 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64vb\" (UniqueName: \"kubernetes.io/projected/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-kube-api-access-r64vb\") pod \"nova-api-634e-account-create-update-px24p\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") " pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.924162 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6157-account-create-update-8fwxn"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.928029 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.938786 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.941354 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nhmf\" (UniqueName: \"kubernetes.io/projected/93b93d08-1c46-4589-bb41-3a6353b03d7f-kube-api-access-7nhmf\") pod \"nova-cell0-db-create-vg7tb\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.941379 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b93d08-1c46-4589-bb41-3a6353b03d7f-operator-scripts\") pod \"nova-cell0-db-create-vg7tb\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.942141 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b93d08-1c46-4589-bb41-3a6353b03d7f-operator-scripts\") pod \"nova-cell0-db-create-vg7tb\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.943746 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-827z7"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.966213 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nhmf\" (UniqueName: \"kubernetes.io/projected/93b93d08-1c46-4589-bb41-3a6353b03d7f-kube-api-access-7nhmf\") pod \"nova-cell0-db-create-vg7tb\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:04 crc 
kubenswrapper[4937]: I0123 06:56:04.966926 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" event={"ID":"a12faab4-277a-44f5-becc-bd8f273ae701","Type":"ContainerStarted","Data":"68a21ee4de62546f3101e109eb048356066415d95d0fb14d80a41106977ecf7e"} Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.966985 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.968121 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6157-account-create-update-8fwxn"] Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.970398 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db28c0b0-f910-42ce-a178-9d98da20f41c","Type":"ContainerStarted","Data":"d805a18e3e8564641550d1861534bcc7ac2ba2da326c937f5b3548b29a68f3db"} Jan 23 06:56:04 crc kubenswrapper[4937]: I0123 06:56:04.983063 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-634e-account-create-update-px24p" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.040764 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ab28-account-create-update-vqsqw"] Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.042473 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.043246 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-operator-scripts\") pod \"nova-cell0-6157-account-create-update-8fwxn\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.043352 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qglcl\" (UniqueName: \"kubernetes.io/projected/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-kube-api-access-qglcl\") pod \"nova-cell0-6157-account-create-update-8fwxn\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.043383 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl79k\" (UniqueName: \"kubernetes.io/projected/da4d3482-c2df-46d9-88ad-d8199d012375-kube-api-access-wl79k\") pod \"nova-cell1-db-create-827z7\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.043511 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4d3482-c2df-46d9-88ad-d8199d012375-operator-scripts\") pod \"nova-cell1-db-create-827z7\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.044292 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 06:56:05 crc 
kubenswrapper[4937]: I0123 06:56:05.046473 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ab28-account-create-update-vqsqw"] Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.049537 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" podStartSLOduration=4.04952479 podStartE2EDuration="4.04952479s" podCreationTimestamp="2026-01-23 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:05.020149113 +0000 UTC m=+1364.823915776" watchObservedRunningTime="2026-01-23 06:56:05.04952479 +0000 UTC m=+1364.853291443" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.106549 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.152820 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4d3482-c2df-46d9-88ad-d8199d012375-operator-scripts\") pod \"nova-cell1-db-create-827z7\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.153130 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-operator-scripts\") pod \"nova-cell0-6157-account-create-update-8fwxn\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.153182 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a931af15-27d9-46f8-ab92-0af812ed5cd5-operator-scripts\") 
pod \"nova-cell1-ab28-account-create-update-vqsqw\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.153274 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qglcl\" (UniqueName: \"kubernetes.io/projected/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-kube-api-access-qglcl\") pod \"nova-cell0-6157-account-create-update-8fwxn\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.153304 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl79k\" (UniqueName: \"kubernetes.io/projected/da4d3482-c2df-46d9-88ad-d8199d012375-kube-api-access-wl79k\") pod \"nova-cell1-db-create-827z7\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.153506 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldqj\" (UniqueName: \"kubernetes.io/projected/a931af15-27d9-46f8-ab92-0af812ed5cd5-kube-api-access-dldqj\") pod \"nova-cell1-ab28-account-create-update-vqsqw\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.154477 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4d3482-c2df-46d9-88ad-d8199d012375-operator-scripts\") pod \"nova-cell1-db-create-827z7\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.155061 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-operator-scripts\") pod \"nova-cell0-6157-account-create-update-8fwxn\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.209360 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl79k\" (UniqueName: \"kubernetes.io/projected/da4d3482-c2df-46d9-88ad-d8199d012375-kube-api-access-wl79k\") pod \"nova-cell1-db-create-827z7\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.216088 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qglcl\" (UniqueName: \"kubernetes.io/projected/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-kube-api-access-qglcl\") pod \"nova-cell0-6157-account-create-update-8fwxn\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.218658 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.275703 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dldqj\" (UniqueName: \"kubernetes.io/projected/a931af15-27d9-46f8-ab92-0af812ed5cd5-kube-api-access-dldqj\") pod \"nova-cell1-ab28-account-create-update-vqsqw\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.276905 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a931af15-27d9-46f8-ab92-0af812ed5cd5-operator-scripts\") pod \"nova-cell1-ab28-account-create-update-vqsqw\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.278167 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a931af15-27d9-46f8-ab92-0af812ed5cd5-operator-scripts\") pod \"nova-cell1-ab28-account-create-update-vqsqw\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.281089 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.313690 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dldqj\" (UniqueName: \"kubernetes.io/projected/a931af15-27d9-46f8-ab92-0af812ed5cd5-kube-api-access-dldqj\") pod \"nova-cell1-ab28-account-create-update-vqsqw\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.604956 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.781179 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-634e-account-create-update-px24p"] Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.829225 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jh76z"] Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.901253 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:05 crc kubenswrapper[4937]: W0123 06:56:05.951745 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode83400c9_5cb0_40c8_907b_92e840794f92.slice/crio-5901315c3b49a53318da94a4b03bf39766b41b78d0e6b56e546736b5a82ab428 WatchSource:0}: Error finding container 5901315c3b49a53318da94a4b03bf39766b41b78d0e6b56e546736b5a82ab428: Status 404 returned error can't find the container with id 5901315c3b49a53318da94a4b03bf39766b41b78d0e6b56e546736b5a82ab428 Jan 23 06:56:05 crc kubenswrapper[4937]: I0123 06:56:05.979071 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.115963 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jh76z" event={"ID":"e83400c9-5cb0-40c8-907b-92e840794f92","Type":"ContainerStarted","Data":"5901315c3b49a53318da94a4b03bf39766b41b78d0e6b56e546736b5a82ab428"} Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.197050 4937 generic.go:334] "Generic (PLEG): container finished" podID="438bef39-6283-4b17-b551-74f127660dbd" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf" exitCode=1 Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.197385 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerDied","Data":"000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf"} Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.197425 4937 scope.go:117] "RemoveContainer" containerID="3f56ab8a8e7d0455ce48848279cc49245b73910866cf9d81886ac5f4cd16962a" Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.198139 4937 scope.go:117] "RemoveContainer" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf" Jan 23 06:56:06 crc kubenswrapper[4937]: E0123 06:56:06.198451 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(438bef39-6283-4b17-b551-74f127660dbd)\"" pod="openstack/watcher-decision-engine-0" podUID="438bef39-6283-4b17-b551-74f127660dbd" Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.243850 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e41ceb1c-4607-42b4-ba84-07bd05554a7b","Type":"ContainerStarted","Data":"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147"} Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.286896 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db28c0b0-f910-42ce-a178-9d98da20f41c","Type":"ContainerStarted","Data":"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"} Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.298579 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vg7tb"] Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.332733 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-634e-account-create-update-px24p" event={"ID":"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa","Type":"ContainerStarted","Data":"034c85c301fb5f0164f8a583d98a4c10636c88435472ff2ed69d48795643c73e"} Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.596926 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.670506 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6157-account-create-update-8fwxn"] Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.711033 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-827z7"] Jan 23 06:56:06 crc kubenswrapper[4937]: W0123 06:56:06.745285 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dd77c35_b6f2_4e71_a6a9_ca7998a8f6be.slice/crio-b2319eb96ea68247c773640c0c02ce78fd9b6058e3b37fe04dcb225e43365a91 WatchSource:0}: Error finding container b2319eb96ea68247c773640c0c02ce78fd9b6058e3b37fe04dcb225e43365a91: Status 404 returned error can't find the container with id b2319eb96ea68247c773640c0c02ce78fd9b6058e3b37fe04dcb225e43365a91 Jan 23 06:56:06 crc kubenswrapper[4937]: W0123 06:56:06.745981 4937 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda4d3482_c2df_46d9_88ad_d8199d012375.slice/crio-783fb03b29d92c4aa5cfc1e4ffe81b806f49197b5cd7e5489a69b885e07ecb18 WatchSource:0}: Error finding container 783fb03b29d92c4aa5cfc1e4ffe81b806f49197b5cd7e5489a69b885e07ecb18: Status 404 returned error can't find the container with id 783fb03b29d92c4aa5cfc1e4ffe81b806f49197b5cd7e5489a69b885e07ecb18 Jan 23 06:56:06 crc kubenswrapper[4937]: I0123 06:56:06.986669 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ab28-account-create-update-vqsqw"] Jan 23 06:56:07 crc kubenswrapper[4937]: W0123 06:56:07.015547 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda931af15_27d9_46f8_ab92_0af812ed5cd5.slice/crio-b7deb6c33b550b41c0cac9824e5078afd155805e354f9873054ec424da3a6d57 WatchSource:0}: Error finding container b7deb6c33b550b41c0cac9824e5078afd155805e354f9873054ec424da3a6d57: Status 404 returned error can't find the container with id b7deb6c33b550b41c0cac9824e5078afd155805e354f9873054ec424da3a6d57 Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.352889 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-827z7" event={"ID":"da4d3482-c2df-46d9-88ad-d8199d012375","Type":"ContainerStarted","Data":"ed63eebdd30ef78e36cb6e60448149282449eca0e96c6fdf74ff2fc8e73fc07a"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.352941 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-827z7" event={"ID":"da4d3482-c2df-46d9-88ad-d8199d012375","Type":"ContainerStarted","Data":"783fb03b29d92c4aa5cfc1e4ffe81b806f49197b5cd7e5489a69b885e07ecb18"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.356421 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" 
event={"ID":"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be","Type":"ContainerStarted","Data":"b2319eb96ea68247c773640c0c02ce78fd9b6058e3b37fe04dcb225e43365a91"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.361118 4937 generic.go:334] "Generic (PLEG): container finished" podID="c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" containerID="8f08204ca761eb2391502cda0f85294e770dff724aeca5d61e6b6f004c590b18" exitCode=0 Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.361170 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-634e-account-create-update-px24p" event={"ID":"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa","Type":"ContainerDied","Data":"8f08204ca761eb2391502cda0f85294e770dff724aeca5d61e6b6f004c590b18"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.364561 4937 generic.go:334] "Generic (PLEG): container finished" podID="e83400c9-5cb0-40c8-907b-92e840794f92" containerID="296b6dea964b365c3d62f1a5fe36508c25fbb696c1820acfb7b44663ed37669b" exitCode=0 Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.364634 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jh76z" event={"ID":"e83400c9-5cb0-40c8-907b-92e840794f92","Type":"ContainerDied","Data":"296b6dea964b365c3d62f1a5fe36508c25fbb696c1820acfb7b44663ed37669b"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.369679 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vg7tb" event={"ID":"93b93d08-1c46-4589-bb41-3a6353b03d7f","Type":"ContainerStarted","Data":"2854dddda04b30ea0b28e1f7161d4e05cedc45b0ee2677a32377784d71dea149"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.369722 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vg7tb" event={"ID":"93b93d08-1c46-4589-bb41-3a6353b03d7f","Type":"ContainerStarted","Data":"719adc251aadce7bd2b411b1252a419e44fdf4f13f67efa1907dfdb73b3b9ec3"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.376102 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e41ceb1c-4607-42b4-ba84-07bd05554a7b","Type":"ContainerStarted","Data":"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.376302 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-log" containerID="cri-o://fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147" gracePeriod=30 Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.376618 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-httpd" containerID="cri-o://6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79" gracePeriod=30 Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.379792 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-827z7" podStartSLOduration=3.379773194 podStartE2EDuration="3.379773194s" podCreationTimestamp="2026-01-23 06:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:07.364167091 +0000 UTC m=+1367.167933744" watchObservedRunningTime="2026-01-23 06:56:07.379773194 +0000 UTC m=+1367.183539847" Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.392428 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" event={"ID":"a931af15-27d9-46f8-ab92-0af812ed5cd5","Type":"ContainerStarted","Data":"b7deb6c33b550b41c0cac9824e5078afd155805e354f9873054ec424da3a6d57"} Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.414287 4937 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.414269281 podStartE2EDuration="6.414269281s" podCreationTimestamp="2026-01-23 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:07.400300342 +0000 UTC m=+1367.204066995" watchObservedRunningTime="2026-01-23 06:56:07.414269281 +0000 UTC m=+1367.218035934" Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.431359 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vg7tb" podStartSLOduration=3.431337843 podStartE2EDuration="3.431337843s" podCreationTimestamp="2026-01-23 06:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:07.422535275 +0000 UTC m=+1367.226301928" watchObservedRunningTime="2026-01-23 06:56:07.431337843 +0000 UTC m=+1367.235104496" Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.530141 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57c6ff4958-kv8rb" Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.602205 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5647cc4676-mx7tx"] Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.602430 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5647cc4676-mx7tx" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api-log" containerID="cri-o://a68c23b26f0f48cd1c8dbe58b94e630124c9b0b9b4da18edbd4e8aeb36862718" gracePeriod=30 Jan 23 06:56:07 crc kubenswrapper[4937]: I0123 06:56:07.602845 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5647cc4676-mx7tx" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api" 
containerID="cri-o://85e7cc946a04e16ef5e3a9a6efdeff68a8d6dc0943a2a1fb6008655038676907" gracePeriod=30 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.052437 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.052801 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.053492 4937 scope.go:117] "RemoveContainer" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf" Jan 23 06:56:08 crc kubenswrapper[4937]: E0123 06:56:08.053785 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(438bef39-6283-4b17-b551-74f127660dbd)\"" pod="openstack/watcher-decision-engine-0" podUID="438bef39-6283-4b17-b551-74f127660dbd" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.360996 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.417973 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db28c0b0-f910-42ce-a178-9d98da20f41c","Type":"ContainerStarted","Data":"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.418138 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-log" containerID="cri-o://1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52" gracePeriod=30 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.418585 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-httpd" containerID="cri-o://3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd" gracePeriod=30 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.427465 4937 generic.go:334] "Generic (PLEG): container finished" podID="a931af15-27d9-46f8-ab92-0af812ed5cd5" containerID="2c132e615275b4476fc34c357601a223b7d9903f80491c44c8d2fc44737182b5" exitCode=0 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.427527 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" event={"ID":"a931af15-27d9-46f8-ab92-0af812ed5cd5","Type":"ContainerDied","Data":"2c132e615275b4476fc34c357601a223b7d9903f80491c44c8d2fc44737182b5"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.440828 4937 generic.go:334] "Generic (PLEG): container finished" podID="e27f321d-6357-47df-a057-31c9036f8ec4" containerID="a68c23b26f0f48cd1c8dbe58b94e630124c9b0b9b4da18edbd4e8aeb36862718" exitCode=143 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 
06:56:08.440955 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5647cc4676-mx7tx" event={"ID":"e27f321d-6357-47df-a057-31c9036f8ec4","Type":"ContainerDied","Data":"a68c23b26f0f48cd1c8dbe58b94e630124c9b0b9b4da18edbd4e8aeb36862718"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.454827 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.454803117 podStartE2EDuration="7.454803117s" podCreationTimestamp="2026-01-23 06:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:08.448165437 +0000 UTC m=+1368.251932110" watchObservedRunningTime="2026-01-23 06:56:08.454803117 +0000 UTC m=+1368.258569770" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.458876 4937 generic.go:334] "Generic (PLEG): container finished" podID="da4d3482-c2df-46d9-88ad-d8199d012375" containerID="ed63eebdd30ef78e36cb6e60448149282449eca0e96c6fdf74ff2fc8e73fc07a" exitCode=0 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.458973 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-827z7" event={"ID":"da4d3482-c2df-46d9-88ad-d8199d012375","Type":"ContainerDied","Data":"ed63eebdd30ef78e36cb6e60448149282449eca0e96c6fdf74ff2fc8e73fc07a"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.477615 4937 generic.go:334] "Generic (PLEG): container finished" podID="6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" containerID="4d3a577243c736f82ded9ab9ec82fadb8e5a0ec350e16d5c35a9e585c626e74e" exitCode=0 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.477729 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" event={"ID":"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be","Type":"ContainerDied","Data":"4d3a577243c736f82ded9ab9ec82fadb8e5a0ec350e16d5c35a9e585c626e74e"} Jan 23 
06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.481449 4937 generic.go:334] "Generic (PLEG): container finished" podID="93b93d08-1c46-4589-bb41-3a6353b03d7f" containerID="2854dddda04b30ea0b28e1f7161d4e05cedc45b0ee2677a32377784d71dea149" exitCode=0 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.481518 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vg7tb" event={"ID":"93b93d08-1c46-4589-bb41-3a6353b03d7f","Type":"ContainerDied","Data":"2854dddda04b30ea0b28e1f7161d4e05cedc45b0ee2677a32377784d71dea149"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493016 4937 generic.go:334] "Generic (PLEG): container finished" podID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerID="6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79" exitCode=143 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493052 4937 generic.go:334] "Generic (PLEG): container finished" podID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerID="fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147" exitCode=143 Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493123 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e41ceb1c-4607-42b4-ba84-07bd05554a7b","Type":"ContainerDied","Data":"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493182 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e41ceb1c-4607-42b4-ba84-07bd05554a7b","Type":"ContainerDied","Data":"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493193 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e41ceb1c-4607-42b4-ba84-07bd05554a7b","Type":"ContainerDied","Data":"a9e5d9aef4021a9d367d80b9efc9f2bfff8e79e417a6ca30f28db30639d3541c"} Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493193 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.493211 4937 scope.go:117] "RemoveContainer" containerID="6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.540428 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-combined-ca-bundle\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.540818 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-httpd-run\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.540840 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-scripts\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.540868 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f6g2\" (UniqueName: \"kubernetes.io/projected/e41ceb1c-4607-42b4-ba84-07bd05554a7b-kube-api-access-9f6g2\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.540980 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-logs\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.541004 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-config-data\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.541082 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\" (UID: \"e41ceb1c-4607-42b4-ba84-07bd05554a7b\") " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.548590 4937 scope.go:117] "RemoveContainer" containerID="fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.551270 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.552230 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-logs" (OuterVolumeSpecName: "logs") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.553746 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.561929 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41ceb1c-4607-42b4-ba84-07bd05554a7b-kube-api-access-9f6g2" (OuterVolumeSpecName: "kube-api-access-9f6g2") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "kube-api-access-9f6g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.604810 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-scripts" (OuterVolumeSpecName: "scripts") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.628145 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.645055 4937 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.645085 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.645095 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f6g2\" (UniqueName: \"kubernetes.io/projected/e41ceb1c-4607-42b4-ba84-07bd05554a7b-kube-api-access-9f6g2\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.645107 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e41ceb1c-4607-42b4-ba84-07bd05554a7b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.645125 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.645134 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.689041 4937 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.693449 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-config-data" (OuterVolumeSpecName: "config-data") pod "e41ceb1c-4607-42b4-ba84-07bd05554a7b" (UID: "e41ceb1c-4607-42b4-ba84-07bd05554a7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.748059 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41ceb1c-4607-42b4-ba84-07bd05554a7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.748091 4937 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.809970 4937 scope.go:117] "RemoveContainer" containerID="6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79" Jan 23 06:56:08 crc kubenswrapper[4937]: E0123 06:56:08.810345 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79\": container with ID starting with 6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79 not found: ID does not exist" containerID="6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.810377 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79"} err="failed to get container status \"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79\": rpc error: code = NotFound desc = could not find container \"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79\": container with ID starting with 6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79 
not found: ID does not exist" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.810396 4937 scope.go:117] "RemoveContainer" containerID="fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147" Jan 23 06:56:08 crc kubenswrapper[4937]: E0123 06:56:08.811816 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147\": container with ID starting with fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147 not found: ID does not exist" containerID="fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.811844 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147"} err="failed to get container status \"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147\": rpc error: code = NotFound desc = could not find container \"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147\": container with ID starting with fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147 not found: ID does not exist" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.811860 4937 scope.go:117] "RemoveContainer" containerID="6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.813235 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79"} err="failed to get container status \"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79\": rpc error: code = NotFound desc = could not find container \"6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79\": container with ID starting with 
6ca28b5a2c7e59e7e26b80ea523837e4a23d4681ecffba4decf59edab3b5ea79 not found: ID does not exist" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.813257 4937 scope.go:117] "RemoveContainer" containerID="fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.820060 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147"} err="failed to get container status \"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147\": rpc error: code = NotFound desc = could not find container \"fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147\": container with ID starting with fab6d2aabc0fbf9fe521aa4845d93ef43a7d0f91632013a3d53abff19d8d9147 not found: ID does not exist" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.831578 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.840139 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.878373 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:08 crc kubenswrapper[4937]: E0123 06:56:08.878866 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-log" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.878883 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-log" Jan 23 06:56:08 crc kubenswrapper[4937]: E0123 06:56:08.878897 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-httpd" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 
06:56:08.878903 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-httpd" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.879074 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-log" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.879107 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" containerName="glance-httpd" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.880126 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.882159 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.883118 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 06:56:08 crc kubenswrapper[4937]: I0123 06:56:08.892933 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.052959 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053361 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-config-data\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " 
pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053417 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053464 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-scripts\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053520 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dthj\" (UniqueName: \"kubernetes.io/projected/7fe28e0b-21c6-4ae2-8878-698855b69670-kube-api-access-6dthj\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053585 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053631 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-logs\") pod \"glance-default-external-api-0\" (UID: 
\"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.053708 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.088450 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jh76z" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.158274 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-config-data\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.163810 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.163919 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-scripts\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.164353 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6dthj\" (UniqueName: \"kubernetes.io/projected/7fe28e0b-21c6-4ae2-8878-698855b69670-kube-api-access-6dthj\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.164469 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.164515 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-logs\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.164687 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.164735 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.165265 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-logs\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.165289 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.165995 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.168390 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.169374 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-config-data\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.176740 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.184377 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.193279 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dthj\" (UniqueName: \"kubernetes.io/projected/7fe28e0b-21c6-4ae2-8878-698855b69670-kube-api-access-6dthj\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.222704 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.232692 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.265519 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83400c9-5cb0-40c8-907b-92e840794f92-operator-scripts\") pod \"e83400c9-5cb0-40c8-907b-92e840794f92\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.267869 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9ngg\" (UniqueName: \"kubernetes.io/projected/e83400c9-5cb0-40c8-907b-92e840794f92-kube-api-access-b9ngg\") pod \"e83400c9-5cb0-40c8-907b-92e840794f92\" (UID: \"e83400c9-5cb0-40c8-907b-92e840794f92\") " Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.269334 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83400c9-5cb0-40c8-907b-92e840794f92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e83400c9-5cb0-40c8-907b-92e840794f92" (UID: "e83400c9-5cb0-40c8-907b-92e840794f92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.274013 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83400c9-5cb0-40c8-907b-92e840794f92-kube-api-access-b9ngg" (OuterVolumeSpecName: "kube-api-access-b9ngg") pod "e83400c9-5cb0-40c8-907b-92e840794f92" (UID: "e83400c9-5cb0-40c8-907b-92e840794f92"). InnerVolumeSpecName "kube-api-access-b9ngg". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.371775 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e83400c9-5cb0-40c8-907b-92e840794f92-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.372035 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9ngg\" (UniqueName: \"kubernetes.io/projected/e83400c9-5cb0-40c8-907b-92e840794f92-kube-api-access-b9ngg\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.486670 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-634e-account-create-update-px24p"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.492066 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.492209 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f756c697-sr9wm"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.493019 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77f756c697-sr9wm"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.509773 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8676986cc8-dkgvq"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.524180 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.547982 4937 generic.go:334] "Generic (PLEG): container finished" podID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerID="3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd" exitCode=0
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.548014 4937 generic.go:334] "Generic (PLEG): container finished" podID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerID="1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52" exitCode=143
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.548051 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db28c0b0-f910-42ce-a178-9d98da20f41c","Type":"ContainerDied","Data":"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.548078 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db28c0b0-f910-42ce-a178-9d98da20f41c","Type":"ContainerDied","Data":"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.548088 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db28c0b0-f910-42ce-a178-9d98da20f41c","Type":"ContainerDied","Data":"d805a18e3e8564641550d1861534bcc7ac2ba2da326c937f5b3548b29a68f3db"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.548103 4937 scope.go:117] "RemoveContainer" containerID="3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.548208 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.560152 4937 generic.go:334] "Generic (PLEG): container finished" podID="e27f321d-6357-47df-a057-31c9036f8ec4" containerID="85e7cc946a04e16ef5e3a9a6efdeff68a8d6dc0943a2a1fb6008655038676907" exitCode=0
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.560277 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5647cc4676-mx7tx" event={"ID":"e27f321d-6357-47df-a057-31c9036f8ec4","Type":"ContainerDied","Data":"85e7cc946a04e16ef5e3a9a6efdeff68a8d6dc0943a2a1fb6008655038676907"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.569924 4937 generic.go:334] "Generic (PLEG): container finished" podID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerID="f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390" exitCode=137
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.570207 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8676986cc8-dkgvq" event={"ID":"d71c9df1-567e-4dd0-be98-fb63b23ebca7","Type":"ContainerDied","Data":"f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.570234 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8676986cc8-dkgvq" event={"ID":"d71c9df1-567e-4dd0-be98-fb63b23ebca7","Type":"ContainerDied","Data":"a2997b909fc6a32649ea18b015b3ecceb30f95e9d1a41b569e00f65cdb24aeb4"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.570315 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8676986cc8-dkgvq"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.577247 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-operator-scripts\") pod \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.577302 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64vb\" (UniqueName: \"kubernetes.io/projected/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-kube-api-access-r64vb\") pod \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\" (UID: \"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.577931 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" (UID: "c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.578204 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.578278 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-634e-account-create-update-px24p"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.578319 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-634e-account-create-update-px24p" event={"ID":"c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa","Type":"ContainerDied","Data":"034c85c301fb5f0164f8a583d98a4c10636c88435472ff2ed69d48795643c73e"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.578376 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="034c85c301fb5f0164f8a583d98a4c10636c88435472ff2ed69d48795643c73e"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.583546 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jh76z" event={"ID":"e83400c9-5cb0-40c8-907b-92e840794f92","Type":"ContainerDied","Data":"5901315c3b49a53318da94a4b03bf39766b41b78d0e6b56e546736b5a82ab428"}
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.583576 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5901315c3b49a53318da94a4b03bf39766b41b78d0e6b56e546736b5a82ab428"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.583650 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jh76z"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.629987 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-kube-api-access-r64vb" (OuterVolumeSpecName: "kube-api-access-r64vb") pod "c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" (UID: "c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa"). InnerVolumeSpecName "kube-api-access-r64vb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.655789 4937 scope.go:117] "RemoveContainer" containerID="1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680345 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680391 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-secret-key\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680439 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-config-data\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680458 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9v42\" (UniqueName: \"kubernetes.io/projected/db28c0b0-f910-42ce-a178-9d98da20f41c-kube-api-access-v9v42\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680529 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71c9df1-567e-4dd0-be98-fb63b23ebca7-logs\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680544 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-config-data\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680609 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-httpd-run\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680793 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-scripts\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680835 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-combined-ca-bundle\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680870 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-logs\") pod \"db28c0b0-f910-42ce-a178-9d98da20f41c\" (UID: \"db28c0b0-f910-42ce-a178-9d98da20f41c\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680887 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-scripts\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680918 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-combined-ca-bundle\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680976 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-tls-certs\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.680999 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6ph2\" (UniqueName: \"kubernetes.io/projected/d71c9df1-567e-4dd0-be98-fb63b23ebca7-kube-api-access-q6ph2\") pod \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\" (UID: \"d71c9df1-567e-4dd0-be98-fb63b23ebca7\") "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.681488 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r64vb\" (UniqueName: \"kubernetes.io/projected/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa-kube-api-access-r64vb\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.686552 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.686916 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d71c9df1-567e-4dd0-be98-fb63b23ebca7-logs" (OuterVolumeSpecName: "logs") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.690378 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-logs" (OuterVolumeSpecName: "logs") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.693247 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.697272 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.703980 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db28c0b0-f910-42ce-a178-9d98da20f41c-kube-api-access-v9v42" (OuterVolumeSpecName: "kube-api-access-v9v42") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "kube-api-access-v9v42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.715893 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d71c9df1-567e-4dd0-be98-fb63b23ebca7-kube-api-access-q6ph2" (OuterVolumeSpecName: "kube-api-access-q6ph2") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "kube-api-access-q6ph2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.729807 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-scripts" (OuterVolumeSpecName: "scripts") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.759973 4937 scope.go:117] "RemoveContainer" containerID="3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"
Jan 23 06:56:09 crc kubenswrapper[4937]: E0123 06:56:09.762737 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd\": container with ID starting with 3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd not found: ID does not exist" containerID="3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.762906 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"} err="failed to get container status \"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd\": rpc error: code = NotFound desc = could not find container \"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd\": container with ID starting with 3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd not found: ID does not exist"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.762930 4937 scope.go:117] "RemoveContainer" containerID="1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.766920 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: E0123 06:56:09.767029 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52\": container with ID starting with 1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52 not found: ID does not exist" containerID="1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.767079 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"} err="failed to get container status \"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52\": rpc error: code = NotFound desc = could not find container \"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52\": container with ID starting with 1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52 not found: ID does not exist"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.767112 4937 scope.go:117] "RemoveContainer" containerID="3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.777654 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd"} err="failed to get container status \"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd\": rpc error: code = NotFound desc = could not find container \"3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd\": container with ID starting with 3084410021de836b212c2c1ac6be13c55bcceef302cf270a1ebf827f6976d0dd not found: ID does not exist"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.777702 4937 scope.go:117] "RemoveContainer" containerID="1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.785940 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.785982 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.785992 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-logs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786001 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6ph2\" (UniqueName: \"kubernetes.io/projected/d71c9df1-567e-4dd0-be98-fb63b23ebca7-kube-api-access-q6ph2\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786023 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786031 4937 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786042 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9v42\" (UniqueName: \"kubernetes.io/projected/db28c0b0-f910-42ce-a178-9d98da20f41c-kube-api-access-v9v42\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786050 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d71c9df1-567e-4dd0-be98-fb63b23ebca7-logs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786058 4937 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db28c0b0-f910-42ce-a178-9d98da20f41c-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786811 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52"} err="failed to get container status \"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52\": rpc error: code = NotFound desc = could not find container \"1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52\": container with ID starting with 1e1a78b4c7fba764097cd313090d6143709d521e588bc8b23039735cfa959a52 not found: ID does not exist"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.786858 4937 scope.go:117] "RemoveContainer" containerID="621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.829739 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.838247 4937 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.888124 4937 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.888144 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.908392 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-scripts" (OuterVolumeSpecName: "scripts") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.911329 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-config-data" (OuterVolumeSpecName: "config-data") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.918828 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-config-data" (OuterVolumeSpecName: "config-data") pod "db28c0b0-f910-42ce-a178-9d98da20f41c" (UID: "db28c0b0-f910-42ce-a178-9d98da20f41c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.925552 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d71c9df1-567e-4dd0-be98-fb63b23ebca7" (UID: "d71c9df1-567e-4dd0-be98-fb63b23ebca7"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.994092 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.994120 4937 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d71c9df1-567e-4dd0-be98-fb63b23ebca7-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.994130 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db28c0b0-f910-42ce-a178-9d98da20f41c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:09 crc kubenswrapper[4937]: I0123 06:56:09.994139 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d71c9df1-567e-4dd0-be98-fb63b23ebca7-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.065963 4937 scope.go:117] "RemoveContainer" containerID="f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.090154 4937 scope.go:117] "RemoveContainer" containerID="621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.090721 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6\": container with ID starting with 621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6 not found: ID does not exist" containerID="621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.090779 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6"} err="failed to get container status \"621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6\": rpc error: code = NotFound desc = could not find container \"621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6\": container with ID starting with 621835b9c002353487666efebf8df80797fcdb73c9de4fbd3288c923e4dcede6 not found: ID does not exist"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.090804 4937 scope.go:117] "RemoveContainer" containerID="f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.091207 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390\": container with ID starting with f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390 not found: ID does not exist" containerID="f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.091238 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390"} err="failed to get container status \"f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390\": rpc error: code = NotFound desc = could not find container \"f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390\": container with ID starting with f8615ffc4bfe0609ab386f7941cbb3d5f1c50fe34fdfba0598dda6d28e799390 not found: ID does not exist"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.139399 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5647cc4676-mx7tx"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.210984 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data-custom\") pod \"e27f321d-6357-47df-a057-31c9036f8ec4\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") "
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.211225 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-combined-ca-bundle\") pod \"e27f321d-6357-47df-a057-31c9036f8ec4\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") "
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.211297 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data\") pod \"e27f321d-6357-47df-a057-31c9036f8ec4\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") "
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.211426 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f321d-6357-47df-a057-31c9036f8ec4-logs\") pod \"e27f321d-6357-47df-a057-31c9036f8ec4\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") "
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.211490 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs6c6\" (UniqueName: \"kubernetes.io/projected/e27f321d-6357-47df-a057-31c9036f8ec4-kube-api-access-cs6c6\") pod \"e27f321d-6357-47df-a057-31c9036f8ec4\" (UID: \"e27f321d-6357-47df-a057-31c9036f8ec4\") "
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.240129 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e27f321d-6357-47df-a057-31c9036f8ec4" (UID: "e27f321d-6357-47df-a057-31c9036f8ec4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.242548 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27f321d-6357-47df-a057-31c9036f8ec4-logs" (OuterVolumeSpecName: "logs") pod "e27f321d-6357-47df-a057-31c9036f8ec4" (UID: "e27f321d-6357-47df-a057-31c9036f8ec4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.246856 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27f321d-6357-47df-a057-31c9036f8ec4-kube-api-access-cs6c6" (OuterVolumeSpecName: "kube-api-access-cs6c6") pod "e27f321d-6357-47df-a057-31c9036f8ec4" (UID: "e27f321d-6357-47df-a057-31c9036f8ec4"). InnerVolumeSpecName "kube-api-access-cs6c6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.312916 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.317698 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs6c6\" (UniqueName: \"kubernetes.io/projected/e27f321d-6357-47df-a057-31c9036f8ec4-kube-api-access-cs6c6\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.317726 4937 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.317736 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e27f321d-6357-47df-a057-31c9036f8ec4-logs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.320764 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e27f321d-6357-47df-a057-31c9036f8ec4" (UID: "e27f321d-6357-47df-a057-31c9036f8ec4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.410722 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.419986 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.438064 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.441003 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-httpd"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.441139 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-httpd"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.441267 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon-log"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.441325 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon-log"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.441381 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api-log"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.441440 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api-log"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.441955 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.442347 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.442430 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.442488 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.442634 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83400c9-5cb0-40c8-907b-92e840794f92" containerName="mariadb-database-create"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.442697 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83400c9-5cb0-40c8-907b-92e840794f92" containerName="mariadb-database-create"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.442751 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" containerName="mariadb-account-create-update"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.442807 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" containerName="mariadb-account-create-update"
Jan 23 06:56:10 crc kubenswrapper[4937]: E0123 06:56:10.442869 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-log"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.442919 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-log"
Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.443190 4937 memory_manager.go:354] "RemoveStaleState removing state"
podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api-log" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.443764 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" containerName="mariadb-account-create-update" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.443834 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-log" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.443905 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" containerName="glance-httpd" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.443974 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon-log" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.444041 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" containerName="barbican-api" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.444103 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83400c9-5cb0-40c8-907b-92e840794f92" containerName="mariadb-database-create" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.444157 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" containerName="horizon" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.445636 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.451513 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.452836 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.456297 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.515484 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data" (OuterVolumeSpecName: "config-data") pod "e27f321d-6357-47df-a057-31c9036f8ec4" (UID: "e27f321d-6357-47df-a057-31c9036f8ec4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525249 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-logs\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525305 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525338 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525386 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525445 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525484 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r972\" (UniqueName: \"kubernetes.io/projected/08017a1f-9b75-4907-8e99-fbec667904c2-kube-api-access-4r972\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525532 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525703 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.525768 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e27f321d-6357-47df-a057-31c9036f8ec4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.547061 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db28c0b0-f910-42ce-a178-9d98da20f41c" path="/var/lib/kubelet/pods/db28c0b0-f910-42ce-a178-9d98da20f41c/volumes" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.547975 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41ceb1c-4607-42b4-ba84-07bd05554a7b" path="/var/lib/kubelet/pods/e41ceb1c-4607-42b4-ba84-07bd05554a7b/volumes" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.576748 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.590501 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8676986cc8-dkgvq"] Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.609939 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-827z7" event={"ID":"da4d3482-c2df-46d9-88ad-d8199d012375","Type":"ContainerDied","Data":"783fb03b29d92c4aa5cfc1e4ffe81b806f49197b5cd7e5489a69b885e07ecb18"} Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.609972 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783fb03b29d92c4aa5cfc1e4ffe81b806f49197b5cd7e5489a69b885e07ecb18" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.616353 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8676986cc8-dkgvq"] Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.617959 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" event={"ID":"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be","Type":"ContainerDied","Data":"b2319eb96ea68247c773640c0c02ce78fd9b6058e3b37fe04dcb225e43365a91"} Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.617988 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2319eb96ea68247c773640c0c02ce78fd9b6058e3b37fe04dcb225e43365a91" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.618041 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6157-account-create-update-8fwxn" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.618439 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.626824 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-operator-scripts\") pod \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.626936 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qglcl\" (UniqueName: \"kubernetes.io/projected/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-kube-api-access-qglcl\") pod \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\" (UID: \"6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627355 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627381 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-logs\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627398 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627422 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627454 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627490 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627513 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r972\" (UniqueName: \"kubernetes.io/projected/08017a1f-9b75-4907-8e99-fbec667904c2-kube-api-access-4r972\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627541 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.627549 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" (UID: "6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.628574 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.629320 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vg7tb" event={"ID":"93b93d08-1c46-4589-bb41-3a6353b03d7f","Type":"ContainerDied","Data":"719adc251aadce7bd2b411b1252a419e44fdf4f13f67efa1907dfdb73b3b9ec3"} Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.629351 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="719adc251aadce7bd2b411b1252a419e44fdf4f13f67efa1907dfdb73b3b9ec3" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.629425 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.629913 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-logs\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 
06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.633516 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.633799 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.639870 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.640244 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-kube-api-access-qglcl" (OuterVolumeSpecName: "kube-api-access-qglcl") pod "6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" (UID: "6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be"). InnerVolumeSpecName "kube-api-access-qglcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.641874 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" event={"ID":"a931af15-27d9-46f8-ab92-0af812ed5cd5","Type":"ContainerDied","Data":"b7deb6c33b550b41c0cac9824e5078afd155805e354f9873054ec424da3a6d57"} Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.641917 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7deb6c33b550b41c0cac9824e5078afd155805e354f9873054ec424da3a6d57" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.645524 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5647cc4676-mx7tx" event={"ID":"e27f321d-6357-47df-a057-31c9036f8ec4","Type":"ContainerDied","Data":"aec06672ed69fa64935a46e15b8e7e48f48e8d519701b1d7f4be7aafaab86510"} Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.645565 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5647cc4676-mx7tx" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.645572 4937 scope.go:117] "RemoveContainer" containerID="85e7cc946a04e16ef5e3a9a6efdeff68a8d6dc0943a2a1fb6008655038676907" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.648007 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.651517 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.651713 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.656160 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r972\" (UniqueName: \"kubernetes.io/projected/08017a1f-9b75-4907-8e99-fbec667904c2-kube-api-access-4r972\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.693001 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5647cc4676-mx7tx"] Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.696573 4937 scope.go:117] "RemoveContainer" containerID="a68c23b26f0f48cd1c8dbe58b94e630124c9b0b9b4da18edbd4e8aeb36862718" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.713814 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5647cc4676-mx7tx"] Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.729369 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nhmf\" (UniqueName: \"kubernetes.io/projected/93b93d08-1c46-4589-bb41-3a6353b03d7f-kube-api-access-7nhmf\") pod 
\"93b93d08-1c46-4589-bb41-3a6353b03d7f\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.729464 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b93d08-1c46-4589-bb41-3a6353b03d7f-operator-scripts\") pod \"93b93d08-1c46-4589-bb41-3a6353b03d7f\" (UID: \"93b93d08-1c46-4589-bb41-3a6353b03d7f\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.730765 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl79k\" (UniqueName: \"kubernetes.io/projected/da4d3482-c2df-46d9-88ad-d8199d012375-kube-api-access-wl79k\") pod \"da4d3482-c2df-46d9-88ad-d8199d012375\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.730926 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a931af15-27d9-46f8-ab92-0af812ed5cd5-operator-scripts\") pod \"a931af15-27d9-46f8-ab92-0af812ed5cd5\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.730985 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4d3482-c2df-46d9-88ad-d8199d012375-operator-scripts\") pod \"da4d3482-c2df-46d9-88ad-d8199d012375\" (UID: \"da4d3482-c2df-46d9-88ad-d8199d012375\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.731100 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dldqj\" (UniqueName: \"kubernetes.io/projected/a931af15-27d9-46f8-ab92-0af812ed5cd5-kube-api-access-dldqj\") pod \"a931af15-27d9-46f8-ab92-0af812ed5cd5\" (UID: \"a931af15-27d9-46f8-ab92-0af812ed5cd5\") " Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.731928 4937 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.731965 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qglcl\" (UniqueName: \"kubernetes.io/projected/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be-kube-api-access-qglcl\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.735469 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d3482-c2df-46d9-88ad-d8199d012375-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da4d3482-c2df-46d9-88ad-d8199d012375" (UID: "da4d3482-c2df-46d9-88ad-d8199d012375"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.736534 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a931af15-27d9-46f8-ab92-0af812ed5cd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a931af15-27d9-46f8-ab92-0af812ed5cd5" (UID: "a931af15-27d9-46f8-ab92-0af812ed5cd5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.740008 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b93d08-1c46-4589-bb41-3a6353b03d7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93b93d08-1c46-4589-bb41-3a6353b03d7f" (UID: "93b93d08-1c46-4589-bb41-3a6353b03d7f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.743264 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b93d08-1c46-4589-bb41-3a6353b03d7f-kube-api-access-7nhmf" (OuterVolumeSpecName: "kube-api-access-7nhmf") pod "93b93d08-1c46-4589-bb41-3a6353b03d7f" (UID: "93b93d08-1c46-4589-bb41-3a6353b03d7f"). InnerVolumeSpecName "kube-api-access-7nhmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.744247 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a931af15-27d9-46f8-ab92-0af812ed5cd5-kube-api-access-dldqj" (OuterVolumeSpecName: "kube-api-access-dldqj") pod "a931af15-27d9-46f8-ab92-0af812ed5cd5" (UID: "a931af15-27d9-46f8-ab92-0af812ed5cd5"). InnerVolumeSpecName "kube-api-access-dldqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.746638 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4d3482-c2df-46d9-88ad-d8199d012375-kube-api-access-wl79k" (OuterVolumeSpecName: "kube-api-access-wl79k") pod "da4d3482-c2df-46d9-88ad-d8199d012375" (UID: "da4d3482-c2df-46d9-88ad-d8199d012375"). InnerVolumeSpecName "kube-api-access-wl79k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.749513 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") " pod="openstack/glance-default-internal-api-0" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.755701 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.834349 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dldqj\" (UniqueName: \"kubernetes.io/projected/a931af15-27d9-46f8-ab92-0af812ed5cd5-kube-api-access-dldqj\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.834546 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nhmf\" (UniqueName: \"kubernetes.io/projected/93b93d08-1c46-4589-bb41-3a6353b03d7f-kube-api-access-7nhmf\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.834626 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93b93d08-1c46-4589-bb41-3a6353b03d7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.834734 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl79k\" (UniqueName: \"kubernetes.io/projected/da4d3482-c2df-46d9-88ad-d8199d012375-kube-api-access-wl79k\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.834798 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a931af15-27d9-46f8-ab92-0af812ed5cd5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc 
kubenswrapper[4937]: I0123 06:56:10.834862 4937 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4d3482-c2df-46d9-88ad-d8199d012375-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:10 crc kubenswrapper[4937]: I0123 06:56:10.913138 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.459032 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:11 crc kubenswrapper[4937]: W0123 06:56:11.479611 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08017a1f_9b75_4907_8e99_fbec667904c2.slice/crio-013e2d90e79c3ba40834fd71084f277513e35b08baa3f2cc39949f11efa1a004 WatchSource:0}: Error finding container 013e2d90e79c3ba40834fd71084f277513e35b08baa3f2cc39949f11efa1a004: Status 404 returned error can't find the container with id 013e2d90e79c3ba40834fd71084f277513e35b08baa3f2cc39949f11efa1a004 Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.682409 4937 generic.go:334] "Generic (PLEG): container finished" podID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerID="bfccc14221b0d9a4f1750c7e8735fe2845ed32f2544207e4650092452433a2ed" exitCode=137 Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.682478 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerDied","Data":"bfccc14221b0d9a4f1750c7e8735fe2845ed32f2544207e4650092452433a2ed"} Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.687356 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"08017a1f-9b75-4907-8e99-fbec667904c2","Type":"ContainerStarted","Data":"013e2d90e79c3ba40834fd71084f277513e35b08baa3f2cc39949f11efa1a004"} Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.697645 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-827z7" Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.697905 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vg7tb" Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.697934 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ab28-account-create-update-vqsqw" Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.697835 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7fe28e0b-21c6-4ae2-8878-698855b69670","Type":"ContainerStarted","Data":"d51134ae252d4dc28fe019eae28859af66c853925683765ec8acced4b7ce9971"} Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.887799 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.972990 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c86dc6f7-zv9mp"] Jan 23 06:56:11 crc kubenswrapper[4937]: I0123 06:56:11.973454 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerName="dnsmasq-dns" containerID="cri-o://c7f213f367ca15e61f13e7b25bc9b932e54424a196a8a8633af88802ff62f8a8" gracePeriod=10 Jan 23 06:56:12 crc kubenswrapper[4937]: I0123 06:56:12.537117 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d71c9df1-567e-4dd0-be98-fb63b23ebca7" 
path="/var/lib/kubelet/pods/d71c9df1-567e-4dd0-be98-fb63b23ebca7/volumes" Jan 23 06:56:12 crc kubenswrapper[4937]: I0123 06:56:12.537748 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27f321d-6357-47df-a057-31c9036f8ec4" path="/var/lib/kubelet/pods/e27f321d-6357-47df-a057-31c9036f8ec4/volumes" Jan 23 06:56:13 crc kubenswrapper[4937]: I0123 06:56:13.730122 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7fe28e0b-21c6-4ae2-8878-698855b69670","Type":"ContainerStarted","Data":"70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91"} Jan 23 06:56:13 crc kubenswrapper[4937]: I0123 06:56:13.734407 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08017a1f-9b75-4907-8e99-fbec667904c2","Type":"ContainerStarted","Data":"ebcd2bf40ed446a7c2f13c548704415cf5ae69e0cb8dc64875bcde8813450de8"} Jan 23 06:56:13 crc kubenswrapper[4937]: I0123 06:56:13.736709 4937 generic.go:334] "Generic (PLEG): container finished" podID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerID="c7f213f367ca15e61f13e7b25bc9b932e54424a196a8a8633af88802ff62f8a8" exitCode=0 Jan 23 06:56:13 crc kubenswrapper[4937]: I0123 06:56:13.736741 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" event={"ID":"102f97ec-9a78-4b42-8fbf-4ece2671905e","Type":"ContainerDied","Data":"c7f213f367ca15e61f13e7b25bc9b932e54424a196a8a8633af88802ff62f8a8"} Jan 23 06:56:13 crc kubenswrapper[4937]: I0123 06:56:13.994179 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.109879 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-log-httpd\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110178 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-sg-core-conf-yaml\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110378 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110543 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf8qk\" (UniqueName: \"kubernetes.io/projected/d82e186e-8995-43ff-a65f-a1918f8495bf-kube-api-access-xf8qk\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110655 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-combined-ca-bundle\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110682 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-scripts\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110707 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-run-httpd\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.110747 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-config-data\") pod \"d82e186e-8995-43ff-a65f-a1918f8495bf\" (UID: \"d82e186e-8995-43ff-a65f-a1918f8495bf\") " Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.111317 4937 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.112653 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.117382 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82e186e-8995-43ff-a65f-a1918f8495bf-kube-api-access-xf8qk" (OuterVolumeSpecName: "kube-api-access-xf8qk") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "kube-api-access-xf8qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.117681 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-scripts" (OuterVolumeSpecName: "scripts") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.151843 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.176281 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.212939 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.212967 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.212976 4937 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d82e186e-8995-43ff-a65f-a1918f8495bf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.212984 4937 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.212993 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf8qk\" (UniqueName: \"kubernetes.io/projected/d82e186e-8995-43ff-a65f-a1918f8495bf-kube-api-access-xf8qk\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.216893 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-config-data" (OuterVolumeSpecName: "config-data") pod "d82e186e-8995-43ff-a65f-a1918f8495bf" (UID: "d82e186e-8995-43ff-a65f-a1918f8495bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.318319 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82e186e-8995-43ff-a65f-a1918f8495bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.782935 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08017a1f-9b75-4907-8e99-fbec667904c2","Type":"ContainerStarted","Data":"3fd318956397264b23fee02e5f42d27bb86bd013beddacad4cf7f1514dfaefa2"} Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.793336 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7fe28e0b-21c6-4ae2-8878-698855b69670","Type":"ContainerStarted","Data":"3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c"} Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.805374 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d82e186e-8995-43ff-a65f-a1918f8495bf","Type":"ContainerDied","Data":"5058d9f053dcf537e0f32faa9b8bc73dbc4f1b89d288239d78c30b4e5490254f"} Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.805428 4937 scope.go:117] "RemoveContainer" containerID="bfccc14221b0d9a4f1750c7e8735fe2845ed32f2544207e4650092452433a2ed" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.805656 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.807709 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.807663515 podStartE2EDuration="4.807663515s" podCreationTimestamp="2026-01-23 06:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:14.804872819 +0000 UTC m=+1374.608639482" watchObservedRunningTime="2026-01-23 06:56:14.807663515 +0000 UTC m=+1374.611430168" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.826815 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.826797644 podStartE2EDuration="6.826797644s" podCreationTimestamp="2026-01-23 06:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:14.824953063 +0000 UTC m=+1374.628719716" watchObservedRunningTime="2026-01-23 06:56:14.826797644 +0000 UTC m=+1374.630564287" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.904283 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.921646 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936085 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936534 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b93d08-1c46-4589-bb41-3a6353b03d7f" containerName="mariadb-database-create" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936554 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="93b93d08-1c46-4589-bb41-3a6353b03d7f" containerName="mariadb-database-create" Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936571 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4d3482-c2df-46d9-88ad-d8199d012375" containerName="mariadb-database-create" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936577 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4d3482-c2df-46d9-88ad-d8199d012375" containerName="mariadb-database-create" Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936602 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a931af15-27d9-46f8-ab92-0af812ed5cd5" containerName="mariadb-account-create-update" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936609 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a931af15-27d9-46f8-ab92-0af812ed5cd5" containerName="mariadb-account-create-update" Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936625 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" containerName="mariadb-account-create-update" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936631 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" containerName="mariadb-account-create-update" Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936649 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="sg-core" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936655 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="sg-core" Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936665 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="proxy-httpd" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936671 4937 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="proxy-httpd" Jan 23 06:56:14 crc kubenswrapper[4937]: E0123 06:56:14.936690 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="ceilometer-notification-agent" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936697 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="ceilometer-notification-agent" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936882 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a931af15-27d9-46f8-ab92-0af812ed5cd5" containerName="mariadb-account-create-update" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936902 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4d3482-c2df-46d9-88ad-d8199d012375" containerName="mariadb-database-create" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936915 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="sg-core" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936925 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="proxy-httpd" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936941 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" containerName="ceilometer-notification-agent" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936948 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b93d08-1c46-4589-bb41-3a6353b03d7f" containerName="mariadb-database-create" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.936961 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" 
containerName="mariadb-account-create-update" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.940216 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.942434 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.942672 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:56:14 crc kubenswrapper[4937]: I0123 06:56:14.960792 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.063945 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-26qqm"] Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.065206 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.067202 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qsw8t" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.068482 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.068881 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.080750 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-26qqm"] Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.134505 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-log-httpd\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.134858 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzst\" (UniqueName: \"kubernetes.io/projected/ccd98200-80ca-4223-aa9a-720238626619-kube-api-access-crzst\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.134896 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.134976 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-scripts\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.136555 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-config-data\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.136836 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.136907 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-run-httpd\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238267 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-scripts\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238334 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-config-data\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238410 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238456 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238497 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-run-httpd\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238546 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhkm\" (UniqueName: \"kubernetes.io/projected/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-kube-api-access-tmhkm\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238584 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-scripts\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238668 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-log-httpd\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238701 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzst\" (UniqueName: \"kubernetes.io/projected/ccd98200-80ca-4223-aa9a-720238626619-kube-api-access-crzst\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238728 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.238770 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-config-data\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.240137 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-log-httpd\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.240916 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-run-httpd\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.246222 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.254429 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-scripts\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc 
kubenswrapper[4937]: I0123 06:56:15.254558 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.256283 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-config-data\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.257870 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzst\" (UniqueName: \"kubernetes.io/projected/ccd98200-80ca-4223-aa9a-720238626619-kube-api-access-crzst\") pod \"ceilometer-0\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.261551 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.340216 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.340333 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmhkm\" (UniqueName: \"kubernetes.io/projected/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-kube-api-access-tmhkm\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.340384 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-scripts\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.340496 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-config-data\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.343544 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-scripts\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " 
pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.344851 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.345451 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-config-data\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.369354 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmhkm\" (UniqueName: \"kubernetes.io/projected/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-kube-api-access-tmhkm\") pod \"nova-cell0-conductor-db-sync-26qqm\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.393010 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:15 crc kubenswrapper[4937]: I0123 06:56:15.925826 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6865b49f4-bgj55" Jan 23 06:56:16 crc kubenswrapper[4937]: I0123 06:56:16.543290 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82e186e-8995-43ff-a65f-a1918f8495bf" path="/var/lib/kubelet/pods/d82e186e-8995-43ff-a65f-a1918f8495bf/volumes" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.233555 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.233865 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.362874 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.365080 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.527767 4937 scope.go:117] "RemoveContainer" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf" Jan 23 06:56:19 crc kubenswrapper[4937]: E0123 06:56:19.527977 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(438bef39-6283-4b17-b551-74f127660dbd)\"" pod="openstack/watcher-decision-engine-0" podUID="438bef39-6283-4b17-b551-74f127660dbd" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.685096 4937 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/cinder-api-0" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.180:8776/healthcheck\": dial tcp 10.217.0.180:8776: connect: connection refused" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.865943 4937 generic.go:334] "Generic (PLEG): container finished" podID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerID="0ecaa150fc5fa2d4d6a1e88d5bca293ed188e878f9097cedd2ff9690208f593f" exitCode=137 Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.866023 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b566c034-7853-41ca-ae1d-e0105f73eadf","Type":"ContainerDied","Data":"0ecaa150fc5fa2d4d6a1e88d5bca293ed188e878f9097cedd2ff9690208f593f"} Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.866351 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:56:19 crc kubenswrapper[4937]: I0123 06:56:19.866514 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.185419 4937 scope.go:117] "RemoveContainer" containerID="409893e7fa7d1be8743a91425df85b01f8059c06ffee495a68d6b638e0c2fc3f" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.411245 4937 scope.go:117] "RemoveContainer" containerID="06a4d7e08f84c2ae242fe9e1f6c6c8470b0636295885dbd9e74b594ba5dda612" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.422977 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.553817 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-svc\") pod \"102f97ec-9a78-4b42-8fbf-4ece2671905e\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.553905 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-config\") pod \"102f97ec-9a78-4b42-8fbf-4ece2671905e\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.554064 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2n5\" (UniqueName: \"kubernetes.io/projected/102f97ec-9a78-4b42-8fbf-4ece2671905e-kube-api-access-bc2n5\") pod \"102f97ec-9a78-4b42-8fbf-4ece2671905e\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.554090 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-swift-storage-0\") pod \"102f97ec-9a78-4b42-8fbf-4ece2671905e\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.554121 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-nb\") pod \"102f97ec-9a78-4b42-8fbf-4ece2671905e\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.554159 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-sb\") pod \"102f97ec-9a78-4b42-8fbf-4ece2671905e\" (UID: \"102f97ec-9a78-4b42-8fbf-4ece2671905e\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.582437 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102f97ec-9a78-4b42-8fbf-4ece2671905e-kube-api-access-bc2n5" (OuterVolumeSpecName: "kube-api-access-bc2n5") pod "102f97ec-9a78-4b42-8fbf-4ece2671905e" (UID: "102f97ec-9a78-4b42-8fbf-4ece2671905e"). InnerVolumeSpecName "kube-api-access-bc2n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.627896 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "102f97ec-9a78-4b42-8fbf-4ece2671905e" (UID: "102f97ec-9a78-4b42-8fbf-4ece2671905e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.670332 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.670739 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2n5\" (UniqueName: \"kubernetes.io/projected/102f97ec-9a78-4b42-8fbf-4ece2671905e-kube-api-access-bc2n5\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.670825 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "102f97ec-9a78-4b42-8fbf-4ece2671905e" (UID: "102f97ec-9a78-4b42-8fbf-4ece2671905e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.677527 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.681319 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-config" (OuterVolumeSpecName: "config") pod "102f97ec-9a78-4b42-8fbf-4ece2671905e" (UID: "102f97ec-9a78-4b42-8fbf-4ece2671905e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.700946 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "102f97ec-9a78-4b42-8fbf-4ece2671905e" (UID: "102f97ec-9a78-4b42-8fbf-4ece2671905e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.709750 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "102f97ec-9a78-4b42-8fbf-4ece2671905e" (UID: "102f97ec-9a78-4b42-8fbf-4ece2671905e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771537 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b566c034-7853-41ca-ae1d-e0105f73eadf-etc-machine-id\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771662 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grqw2\" (UniqueName: \"kubernetes.io/projected/b566c034-7853-41ca-ae1d-e0105f73eadf-kube-api-access-grqw2\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771746 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-combined-ca-bundle\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " 
Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771768 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b566c034-7853-41ca-ae1d-e0105f73eadf-logs\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771827 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data-custom\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771844 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.771865 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-scripts\") pod \"b566c034-7853-41ca-ae1d-e0105f73eadf\" (UID: \"b566c034-7853-41ca-ae1d-e0105f73eadf\") " Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.772326 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.772337 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.772346 4937 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.772354 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/102f97ec-9a78-4b42-8fbf-4ece2671905e-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.772533 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b566c034-7853-41ca-ae1d-e0105f73eadf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.774653 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b566c034-7853-41ca-ae1d-e0105f73eadf-logs" (OuterVolumeSpecName: "logs") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.784178 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b566c034-7853-41ca-ae1d-e0105f73eadf-kube-api-access-grqw2" (OuterVolumeSpecName: "kube-api-access-grqw2") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "kube-api-access-grqw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.789437 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.790869 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.790961 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-scripts" (OuterVolumeSpecName: "scripts") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.839779 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data" (OuterVolumeSpecName: "config-data") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.861967 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: i/o timeout" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.869875 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b566c034-7853-41ca-ae1d-e0105f73eadf" (UID: "b566c034-7853-41ca-ae1d-e0105f73eadf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.874933 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.874966 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b566c034-7853-41ca-ae1d-e0105f73eadf-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.874978 4937 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.874985 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.874993 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b566c034-7853-41ca-ae1d-e0105f73eadf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.875001 4937 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b566c034-7853-41ca-ae1d-e0105f73eadf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.875009 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grqw2\" (UniqueName: \"kubernetes.io/projected/b566c034-7853-41ca-ae1d-e0105f73eadf-kube-api-access-grqw2\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.894176 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b4d887c1-2f50-42c9-9adf-3f4fe512f399","Type":"ContainerStarted","Data":"5750392b4b35f39404c5d2c7786f95bf82c987ed657f1f6578a1286ddaa7bdd5"} Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.897173 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.897232 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b566c034-7853-41ca-ae1d-e0105f73eadf","Type":"ContainerDied","Data":"cc5f62d58b429e223ced01c29f11fc99bf17f966af06d7fa3dd6d930190f36e1"} Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.897277 4937 scope.go:117] "RemoveContainer" containerID="0ecaa150fc5fa2d4d6a1e88d5bca293ed188e878f9097cedd2ff9690208f593f" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.905294 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" event={"ID":"102f97ec-9a78-4b42-8fbf-4ece2671905e","Type":"ContainerDied","Data":"22803e91ef510242cba01e4feb549e807ba6e9e40f529d64333e3b166eab89f7"} Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.905423 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9c86dc6f7-zv9mp" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.914353 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.915131 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.925035 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerStarted","Data":"d3b8cfb6307d6d3120f7b1ffe185cdf5ef9156a0e4364144f89240d667610cdf"} Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.960197 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.620038913 podStartE2EDuration="25.960180198s" podCreationTimestamp="2026-01-23 06:55:55 +0000 UTC" 
firstStartedPulling="2026-01-23 06:55:56.859172613 +0000 UTC m=+1356.662939266" lastFinishedPulling="2026-01-23 06:56:20.199313898 +0000 UTC m=+1380.003080551" observedRunningTime="2026-01-23 06:56:20.914557271 +0000 UTC m=+1380.718323934" watchObservedRunningTime="2026-01-23 06:56:20.960180198 +0000 UTC m=+1380.763946851" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.964951 4937 scope.go:117] "RemoveContainer" containerID="e76edf13fa241ebcdeddf03541eeb969ec8bf71cbe4a4c2d3e60165f58891823" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.970678 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.980548 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:20 crc kubenswrapper[4937]: I0123 06:56:20.982718 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.002192 4937 scope.go:117] "RemoveContainer" containerID="c7f213f367ca15e61f13e7b25bc9b932e54424a196a8a8633af88802ff62f8a8" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.015654 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-26qqm"] Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.028641 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.033548 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9c86dc6f7-zv9mp"] Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043187 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:56:21 crc kubenswrapper[4937]: E0123 06:56:21.043670 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" 
containerName="init" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043689 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerName="init" Jan 23 06:56:21 crc kubenswrapper[4937]: E0123 06:56:21.043702 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerName="dnsmasq-dns" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043712 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerName="dnsmasq-dns" Jan 23 06:56:21 crc kubenswrapper[4937]: E0123 06:56:21.043726 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api-log" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043732 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api-log" Jan 23 06:56:21 crc kubenswrapper[4937]: E0123 06:56:21.043755 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043760 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043946 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043972 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" containerName="cinder-api-log" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.043983 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" containerName="dnsmasq-dns" Jan 23 06:56:21 crc 
kubenswrapper[4937]: I0123 06:56:21.044882 4937 scope.go:117] "RemoveContainer" containerID="a6cda470b094e213f2498eb3fca45cd251c1d3fd4296d924c0f9f7dd7e9ff3ea" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.044990 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.048059 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.048308 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.049875 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.052221 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9c86dc6f7-zv9mp"] Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.061249 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.184773 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-public-tls-certs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.184838 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.184887 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.184926 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-config-data\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.184961 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045e6674-9717-4cee-960c-7d049e797f45-logs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.185002 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t52\" (UniqueName: \"kubernetes.io/projected/045e6674-9717-4cee-960c-7d049e797f45-kube-api-access-z4t52\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.185024 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/045e6674-9717-4cee-960c-7d049e797f45-etc-machine-id\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.185058 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-config-data-custom\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.185104 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-scripts\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.286637 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-public-tls-certs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.286911 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.286957 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.286989 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-config-data\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0" Jan 23 06:56:21 crc 
kubenswrapper[4937]: I0123 06:56:21.287017 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045e6674-9717-4cee-960c-7d049e797f45-logs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.287047 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t52\" (UniqueName: \"kubernetes.io/projected/045e6674-9717-4cee-960c-7d049e797f45-kube-api-access-z4t52\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.287068 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/045e6674-9717-4cee-960c-7d049e797f45-etc-machine-id\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.287105 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-config-data-custom\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.287144 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-scripts\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.287177 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/045e6674-9717-4cee-960c-7d049e797f45-etc-machine-id\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.289002 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/045e6674-9717-4cee-960c-7d049e797f45-logs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.291058 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-public-tls-certs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.291518 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.291852 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-scripts\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.292554 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.293261 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.295072 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-config-data\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.297205 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/045e6674-9717-4cee-960c-7d049e797f45-config-data-custom\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.306922 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t52\" (UniqueName: \"kubernetes.io/projected/045e6674-9717-4cee-960c-7d049e797f45-kube-api-access-z4t52\") pod \"cinder-api-0\" (UID: \"045e6674-9717-4cee-960c-7d049e797f45\") " pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.452662 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.937949 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-26qqm" event={"ID":"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3","Type":"ContainerStarted","Data":"1474dccdb0d4b5d84cca6244b233ef377db34db37d69f49f0df1255dc2dd950f"}
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.942544 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:21 crc kubenswrapper[4937]: I0123 06:56:21.942613 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.040155 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.546222 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102f97ec-9a78-4b42-8fbf-4ece2671905e" path="/var/lib/kubelet/pods/102f97ec-9a78-4b42-8fbf-4ece2671905e/volumes"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.547195 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b566c034-7853-41ca-ae1d-e0105f73eadf" path="/var/lib/kubelet/pods/b566c034-7853-41ca-ae1d-e0105f73eadf/volumes"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.709149 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6845797bf7-lmcfd"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.830316 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.830800 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.838705 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6865b49f4-bgj55"]
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.838981 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6865b49f4-bgj55" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-api" containerID="cri-o://499459158d7beb21ac0fee9cb57ca6ca053ee554c6a8dec1c2799f991c968097" gracePeriod=30
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.839142 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6865b49f4-bgj55" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-httpd" containerID="cri-o://4cc4b770da1518b11168b354becc611e3693802c8d981e2347509d2b8206c433" gracePeriod=30
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.896143 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 06:56:22 crc kubenswrapper[4937]: I0123 06:56:22.990439 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"045e6674-9717-4cee-960c-7d049e797f45","Type":"ContainerStarted","Data":"6e922566467ddda11a3f3a6b4420f72741db3b61fff25c7081d1fa5e5ad5c703"}
Jan 23 06:56:23 crc kubenswrapper[4937]: I0123 06:56:22.997371 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerStarted","Data":"b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05"}
Jan 23 06:56:23 crc kubenswrapper[4937]: I0123 06:56:22.997407 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerStarted","Data":"f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf"}
Jan 23 06:56:23 crc kubenswrapper[4937]: E0123 06:56:23.131958 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod226f524e_e956_468f_998d_693d51ec48a0.slice/crio-4cc4b770da1518b11168b354becc611e3693802c8d981e2347509d2b8206c433.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.009345 4937 generic.go:334] "Generic (PLEG): container finished" podID="226f524e-e956-468f-998d-693d51ec48a0" containerID="4cc4b770da1518b11168b354becc611e3693802c8d981e2347509d2b8206c433" exitCode=0
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.009427 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6865b49f4-bgj55" event={"ID":"226f524e-e956-468f-998d-693d51ec48a0","Type":"ContainerDied","Data":"4cc4b770da1518b11168b354becc611e3693802c8d981e2347509d2b8206c433"}
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.012253 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerStarted","Data":"c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0"}
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.014177 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"045e6674-9717-4cee-960c-7d049e797f45","Type":"ContainerStarted","Data":"b721f34b1afb53fdfac399c46544a678c2c216ab9bb24f9720fff312d897dddb"}
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.538892 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.539186 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 06:56:24 crc kubenswrapper[4937]: I0123 06:56:24.655671 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:25 crc kubenswrapper[4937]: I0123 06:56:25.024885 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"045e6674-9717-4cee-960c-7d049e797f45","Type":"ContainerStarted","Data":"f41447e4dd125b195b9fc8f05c6270a249b098b854b564226cf3aa06559c6f71"}
Jan 23 06:56:25 crc kubenswrapper[4937]: I0123 06:56:25.025305 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 23 06:56:25 crc kubenswrapper[4937]: I0123 06:56:25.051297 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.051279 podStartE2EDuration="5.051279s" podCreationTimestamp="2026-01-23 06:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:25.04353513 +0000 UTC m=+1384.847301803" watchObservedRunningTime="2026-01-23 06:56:25.051279 +0000 UTC m=+1384.855045653"
Jan 23 06:56:28 crc kubenswrapper[4937]: I0123 06:56:28.052390 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:28 crc kubenswrapper[4937]: I0123 06:56:28.053201 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:28 crc kubenswrapper[4937]: I0123 06:56:28.053937 4937 scope.go:117] "RemoveContainer" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf"
Jan 23 06:56:28 crc kubenswrapper[4937]: I0123 06:56:28.083172 4937 generic.go:334] "Generic (PLEG): container finished" podID="226f524e-e956-468f-998d-693d51ec48a0" containerID="499459158d7beb21ac0fee9cb57ca6ca053ee554c6a8dec1c2799f991c968097" exitCode=0
Jan 23 06:56:28 crc kubenswrapper[4937]: I0123 06:56:28.083238 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6865b49f4-bgj55" event={"ID":"226f524e-e956-468f-998d-693d51ec48a0","Type":"ContainerDied","Data":"499459158d7beb21ac0fee9cb57ca6ca053ee554c6a8dec1c2799f991c968097"}
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.168458 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerStarted","Data":"a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1"}
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.168952 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.168712 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-notification-agent" containerID="cri-o://b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05" gracePeriod=30
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.168697 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="sg-core" containerID="cri-o://c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0" gracePeriod=30
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.168676 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="proxy-httpd" containerID="cri-o://a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1" gracePeriod=30
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.169068 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-central-agent" containerID="cri-o://f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf" gracePeriod=30
Jan 23 06:56:34 crc kubenswrapper[4937]: I0123 06:56:34.192394 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.087471568 podStartE2EDuration="20.192376057s" podCreationTimestamp="2026-01-23 06:56:14 +0000 UTC" firstStartedPulling="2026-01-23 06:56:20.803839877 +0000 UTC m=+1380.607606530" lastFinishedPulling="2026-01-23 06:56:27.908744366 +0000 UTC m=+1387.712511019" observedRunningTime="2026-01-23 06:56:34.18880924 +0000 UTC m=+1393.992575903" watchObservedRunningTime="2026-01-23 06:56:34.192376057 +0000 UTC m=+1393.996142710"
Jan 23 06:56:35 crc kubenswrapper[4937]: I0123 06:56:35.181941 4937 generic.go:334] "Generic (PLEG): container finished" podID="ccd98200-80ca-4223-aa9a-720238626619" containerID="c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0" exitCode=2
Jan 23 06:56:35 crc kubenswrapper[4937]: I0123 06:56:35.182238 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerDied","Data":"c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0"}
Jan 23 06:56:35 crc kubenswrapper[4937]: I0123 06:56:35.443311 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 23 06:56:35 crc kubenswrapper[4937]: I0123 06:56:35.899089 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.033938 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-ovndb-tls-certs\") pod \"226f524e-e956-468f-998d-693d51ec48a0\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") "
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.034025 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqj9d\" (UniqueName: \"kubernetes.io/projected/226f524e-e956-468f-998d-693d51ec48a0-kube-api-access-jqj9d\") pod \"226f524e-e956-468f-998d-693d51ec48a0\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") "
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.034123 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-httpd-config\") pod \"226f524e-e956-468f-998d-693d51ec48a0\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") "
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.034167 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-combined-ca-bundle\") pod \"226f524e-e956-468f-998d-693d51ec48a0\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") "
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.034211 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-config\") pod \"226f524e-e956-468f-998d-693d51ec48a0\" (UID: \"226f524e-e956-468f-998d-693d51ec48a0\") "
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.041898 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226f524e-e956-468f-998d-693d51ec48a0-kube-api-access-jqj9d" (OuterVolumeSpecName: "kube-api-access-jqj9d") pod "226f524e-e956-468f-998d-693d51ec48a0" (UID: "226f524e-e956-468f-998d-693d51ec48a0"). InnerVolumeSpecName "kube-api-access-jqj9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.043749 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "226f524e-e956-468f-998d-693d51ec48a0" (UID: "226f524e-e956-468f-998d-693d51ec48a0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.099843 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "226f524e-e956-468f-998d-693d51ec48a0" (UID: "226f524e-e956-468f-998d-693d51ec48a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.105827 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-config" (OuterVolumeSpecName: "config") pod "226f524e-e956-468f-998d-693d51ec48a0" (UID: "226f524e-e956-468f-998d-693d51ec48a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.134921 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "226f524e-e956-468f-998d-693d51ec48a0" (UID: "226f524e-e956-468f-998d-693d51ec48a0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.136108 4937 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.136129 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqj9d\" (UniqueName: \"kubernetes.io/projected/226f524e-e956-468f-998d-693d51ec48a0-kube-api-access-jqj9d\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.136140 4937 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.136152 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.136160 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/226f524e-e956-468f-998d-693d51ec48a0-config\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.191902 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerStarted","Data":"ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6"}
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.195331 4937 generic.go:334] "Generic (PLEG): container finished" podID="ccd98200-80ca-4223-aa9a-720238626619" containerID="f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf" exitCode=0
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.195405 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerDied","Data":"f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf"}
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.197059 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-26qqm" event={"ID":"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3","Type":"ContainerStarted","Data":"da55b988ef3e3f777e74f987057d7267952014001749ab4eba172f45c88978e7"}
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.198709 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6865b49f4-bgj55"
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.198696 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6865b49f4-bgj55" event={"ID":"226f524e-e956-468f-998d-693d51ec48a0","Type":"ContainerDied","Data":"dbeba780e2e819e5718d3ef45d69fb439880c5628be76d89eda2da36a3c53ca6"}
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.198794 4937 scope.go:117] "RemoveContainer" containerID="4cc4b770da1518b11168b354becc611e3693802c8d981e2347509d2b8206c433"
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.228885 4937 scope.go:117] "RemoveContainer" containerID="499459158d7beb21ac0fee9cb57ca6ca053ee554c6a8dec1c2799f991c968097"
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.235861 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-26qqm" podStartSLOduration=6.811951424 podStartE2EDuration="21.235839931s" podCreationTimestamp="2026-01-23 06:56:15 +0000 UTC" firstStartedPulling="2026-01-23 06:56:20.987833978 +0000 UTC m=+1380.791600621" lastFinishedPulling="2026-01-23 06:56:35.411722475 +0000 UTC m=+1395.215489128" observedRunningTime="2026-01-23 06:56:36.228156703 +0000 UTC m=+1396.031923346" watchObservedRunningTime="2026-01-23 06:56:36.235839931 +0000 UTC m=+1396.039606584"
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.269915 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6865b49f4-bgj55"]
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.280134 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6865b49f4-bgj55"]
Jan 23 06:56:36 crc kubenswrapper[4937]: I0123 06:56:36.549105 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226f524e-e956-468f-998d-693d51ec48a0" path="/var/lib/kubelet/pods/226f524e-e956-468f-998d-693d51ec48a0/volumes"
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.053064 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.086232 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.220358 4937 generic.go:334] "Generic (PLEG): container finished" podID="ccd98200-80ca-4223-aa9a-720238626619" containerID="b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05" exitCode=0
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.220400 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerDied","Data":"b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05"}
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.220641 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.257307 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:38 crc kubenswrapper[4937]: I0123 06:56:38.299442 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 23 06:56:40 crc kubenswrapper[4937]: I0123 06:56:40.245864 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine" containerID="cri-o://ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6" gracePeriod=30
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.804157 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.939512 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-custom-prometheus-ca\") pod \"438bef39-6283-4b17-b551-74f127660dbd\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") "
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.939746 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-combined-ca-bundle\") pod \"438bef39-6283-4b17-b551-74f127660dbd\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") "
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.939794 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438bef39-6283-4b17-b551-74f127660dbd-logs\") pod \"438bef39-6283-4b17-b551-74f127660dbd\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") "
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.939892 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-config-data\") pod \"438bef39-6283-4b17-b551-74f127660dbd\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") "
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.940157 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/438bef39-6283-4b17-b551-74f127660dbd-logs" (OuterVolumeSpecName: "logs") pod "438bef39-6283-4b17-b551-74f127660dbd" (UID: "438bef39-6283-4b17-b551-74f127660dbd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.940422 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljs26\" (UniqueName: \"kubernetes.io/projected/438bef39-6283-4b17-b551-74f127660dbd-kube-api-access-ljs26\") pod \"438bef39-6283-4b17-b551-74f127660dbd\" (UID: \"438bef39-6283-4b17-b551-74f127660dbd\") "
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.940946 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/438bef39-6283-4b17-b551-74f127660dbd-logs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.949868 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/438bef39-6283-4b17-b551-74f127660dbd-kube-api-access-ljs26" (OuterVolumeSpecName: "kube-api-access-ljs26") pod "438bef39-6283-4b17-b551-74f127660dbd" (UID: "438bef39-6283-4b17-b551-74f127660dbd"). InnerVolumeSpecName "kube-api-access-ljs26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.972713 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "438bef39-6283-4b17-b551-74f127660dbd" (UID: "438bef39-6283-4b17-b551-74f127660dbd"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.973040 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "438bef39-6283-4b17-b551-74f127660dbd" (UID: "438bef39-6283-4b17-b551-74f127660dbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:41 crc kubenswrapper[4937]: I0123 06:56:41.997807 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-config-data" (OuterVolumeSpecName: "config-data") pod "438bef39-6283-4b17-b551-74f127660dbd" (UID: "438bef39-6283-4b17-b551-74f127660dbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.042986 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljs26\" (UniqueName: \"kubernetes.io/projected/438bef39-6283-4b17-b551-74f127660dbd-kube-api-access-ljs26\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.043022 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.043035 4937 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.043045 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/438bef39-6283-4b17-b551-74f127660dbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.263447 4937 generic.go:334] "Generic (PLEG): container finished" podID="438bef39-6283-4b17-b551-74f127660dbd" containerID="ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6" exitCode=0
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.263496 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.263790 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerDied","Data":"ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6"}
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.263889 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"438bef39-6283-4b17-b551-74f127660dbd","Type":"ContainerDied","Data":"322c78f8fc797238da34db8f9417ca80cdf76f02385c20b58b0e0036de213d4a"}
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.263916 4937 scope.go:117] "RemoveContainer" containerID="ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.287517 4937 scope.go:117] "RemoveContainer" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.294666 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.313450 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.381155 4937 scope.go:117] "RemoveContainer" containerID="ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.381512 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.382004 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-httpd"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382024 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-httpd"
Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.382039 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382048 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.382063 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382070 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.382095 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-api"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382102 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-api"
Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.382121 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382128 4937
state_mem.go:107] "Deleted CPUSet assignment" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382352 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-httpd" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382378 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="226f524e-e956-468f-998d-693d51ec48a0" containerName="neutron-api" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382386 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382395 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.382407 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.383195 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.384193 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6\": container with ID starting with ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6 not found: ID does not exist" containerID="ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.384230 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6"} err="failed to get container status \"ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6\": rpc error: code = NotFound desc = could not find container \"ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6\": container with ID starting with ec011ceb83ab32fe824adbf1062e76c24aa5a13dc48cadc5bf2a3af5f1d8cfe6 not found: ID does not exist" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.384259 4937 scope.go:117] "RemoveContainer" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf" Jan 23 06:56:42 crc kubenswrapper[4937]: E0123 06:56:42.384650 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf\": container with ID starting with 000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf not found: ID does not exist" containerID="000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.384675 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf"} 
err="failed to get container status \"000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf\": rpc error: code = NotFound desc = could not find container \"000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf\": container with ID starting with 000f4fbf08213abbde3dddec02a2b46b353b3fd5bbddd5f0e670be55e478e7cf not found: ID does not exist" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.385687 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.392847 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.454714 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.454849 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.455167 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-config-data\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.455227 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvn8\" (UniqueName: \"kubernetes.io/projected/989f59fa-b14d-4d8d-949f-e4e397afdeba-kube-api-access-7lvn8\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.455297 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989f59fa-b14d-4d8d-949f-e4e397afdeba-logs\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.554960 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="438bef39-6283-4b17-b551-74f127660dbd" path="/var/lib/kubelet/pods/438bef39-6283-4b17-b551-74f127660dbd/volumes" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.557063 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.557193 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.557277 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-config-data\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.557312 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvn8\" (UniqueName: \"kubernetes.io/projected/989f59fa-b14d-4d8d-949f-e4e397afdeba-kube-api-access-7lvn8\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.557343 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989f59fa-b14d-4d8d-949f-e4e397afdeba-logs\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.557777 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/989f59fa-b14d-4d8d-949f-e4e397afdeba-logs\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.562553 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.569086 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: 
\"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.572368 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/989f59fa-b14d-4d8d-949f-e4e397afdeba-config-data\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.604369 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvn8\" (UniqueName: \"kubernetes.io/projected/989f59fa-b14d-4d8d-949f-e4e397afdeba-kube-api-access-7lvn8\") pod \"watcher-decision-engine-0\" (UID: \"989f59fa-b14d-4d8d-949f-e4e397afdeba\") " pod="openstack/watcher-decision-engine-0" Jan 23 06:56:42 crc kubenswrapper[4937]: I0123 06:56:42.707540 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:43 crc kubenswrapper[4937]: I0123 06:56:43.170781 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 06:56:43 crc kubenswrapper[4937]: W0123 06:56:43.173350 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod989f59fa_b14d_4d8d_949f_e4e397afdeba.slice/crio-3c2944458902b6f63fc5f0b12c9a486fca8c9c53b111a974810b1edd46a1b794 WatchSource:0}: Error finding container 3c2944458902b6f63fc5f0b12c9a486fca8c9c53b111a974810b1edd46a1b794: Status 404 returned error can't find the container with id 3c2944458902b6f63fc5f0b12c9a486fca8c9c53b111a974810b1edd46a1b794 Jan 23 06:56:43 crc kubenswrapper[4937]: I0123 06:56:43.275008 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"989f59fa-b14d-4d8d-949f-e4e397afdeba","Type":"ContainerStarted","Data":"3c2944458902b6f63fc5f0b12c9a486fca8c9c53b111a974810b1edd46a1b794"} Jan 23 06:56:43 crc kubenswrapper[4937]: I0123 06:56:43.607724 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 06:56:43 crc kubenswrapper[4937]: I0123 06:56:43.611664 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-log" containerID="cri-o://70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91" gracePeriod=30 Jan 23 06:56:43 crc kubenswrapper[4937]: I0123 06:56:43.612270 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-httpd" containerID="cri-o://3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c" gracePeriod=30 Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.286841 4937 generic.go:334] "Generic (PLEG): container finished" podID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerID="70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91" exitCode=143 Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.286916 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7fe28e0b-21c6-4ae2-8878-698855b69670","Type":"ContainerDied","Data":"70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91"} Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.289161 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"989f59fa-b14d-4d8d-949f-e4e397afdeba","Type":"ContainerStarted","Data":"1a05bc66cf354072bed82fcf9651093693c9a3294530c23d59637114dc9b28e6"} Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.663024 4937 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.663004051 podStartE2EDuration="2.663004051s" podCreationTimestamp="2026-01-23 06:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:44.313819979 +0000 UTC m=+1404.117586632" watchObservedRunningTime="2026-01-23 06:56:44.663004051 +0000 UTC m=+1404.466770704" Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.671298 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.671565 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-log" containerID="cri-o://ebcd2bf40ed446a7c2f13c548704415cf5ae69e0cb8dc64875bcde8813450de8" gracePeriod=30 Jan 23 06:56:44 crc kubenswrapper[4937]: I0123 06:56:44.671698 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-httpd" containerID="cri-o://3fd318956397264b23fee02e5f42d27bb86bd013beddacad4cf7f1514dfaefa2" gracePeriod=30 Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.100549 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.231710 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-logs\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.231769 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-httpd-run\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.231793 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.231884 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dthj\" (UniqueName: \"kubernetes.io/projected/7fe28e0b-21c6-4ae2-8878-698855b69670-kube-api-access-6dthj\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.231981 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-config-data\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.232022 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-public-tls-certs\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.232051 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-scripts\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.232070 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-combined-ca-bundle\") pod \"7fe28e0b-21c6-4ae2-8878-698855b69670\" (UID: \"7fe28e0b-21c6-4ae2-8878-698855b69670\") " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.253242 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.253537 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-logs" (OuterVolumeSpecName: "logs") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.261863 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe28e0b-21c6-4ae2-8878-698855b69670-kube-api-access-6dthj" (OuterVolumeSpecName: "kube-api-access-6dthj") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "kube-api-access-6dthj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.271751 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-scripts" (OuterVolumeSpecName: "scripts") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.296958 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.334056 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dthj\" (UniqueName: \"kubernetes.io/projected/7fe28e0b-21c6-4ae2-8878-698855b69670-kube-api-access-6dthj\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.334087 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.334096 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.334104 4937 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fe28e0b-21c6-4ae2-8878-698855b69670-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.334125 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.349487 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.352225 4937 generic.go:334] "Generic (PLEG): container finished" podID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerID="3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c" exitCode=0 Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.352312 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7fe28e0b-21c6-4ae2-8878-698855b69670","Type":"ContainerDied","Data":"3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c"} Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.352336 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7fe28e0b-21c6-4ae2-8878-698855b69670","Type":"ContainerDied","Data":"d51134ae252d4dc28fe019eae28859af66c853925683765ec8acced4b7ce9971"} Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.352352 4937 scope.go:117] "RemoveContainer" containerID="3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.352516 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.379662 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.383674 4937 generic.go:334] "Generic (PLEG): container finished" podID="08017a1f-9b75-4907-8e99-fbec667904c2" containerID="ebcd2bf40ed446a7c2f13c548704415cf5ae69e0cb8dc64875bcde8813450de8" exitCode=143 Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.384293 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08017a1f-9b75-4907-8e99-fbec667904c2","Type":"ContainerDied","Data":"ebcd2bf40ed446a7c2f13c548704415cf5ae69e0cb8dc64875bcde8813450de8"} Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.409093 4937 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.421805 4937 scope.go:117] "RemoveContainer" containerID="70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.453266 4937 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.453296 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.465774 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.480019 4937 scope.go:117] "RemoveContainer" containerID="3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.483131 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-config-data" (OuterVolumeSpecName: "config-data") pod "7fe28e0b-21c6-4ae2-8878-698855b69670" (UID: "7fe28e0b-21c6-4ae2-8878-698855b69670"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:45 crc kubenswrapper[4937]: E0123 06:56:45.483140 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c\": container with ID starting with 3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c not found: ID does not exist" containerID="3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.483218 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c"} err="failed to get container status \"3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c\": rpc error: code = NotFound desc = could not find container \"3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c\": container with ID starting with 3c24dac84ca25ee5a5797b549594363c01fff17fa70070d9d87a544992e98f8c not found: ID does not exist" Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.483244 4937 scope.go:117] "RemoveContainer" containerID="70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91" Jan 23 06:56:45 crc kubenswrapper[4937]: E0123 06:56:45.485966 4937 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91\": container with ID starting with 70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91 not found: ID does not exist" containerID="70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.486001 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91"} err="failed to get container status \"70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91\": rpc error: code = NotFound desc = could not find container \"70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91\": container with ID starting with 70d79d155572853e90860dd87fabc9db4f96de74fcce0fbc7e853ca1d2e59d91 not found: ID does not exist"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.554820 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.554866 4937 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fe28e0b-21c6-4ae2-8878-698855b69670-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.685132 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.694865 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.706927 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 06:56:45 crc kubenswrapper[4937]: E0123 06:56:45.707373 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-log"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.707395 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-log"
Jan 23 06:56:45 crc kubenswrapper[4937]: E0123 06:56:45.707438 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-httpd"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.707447 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-httpd"
Jan 23 06:56:45 crc kubenswrapper[4937]: E0123 06:56:45.707470 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.707478 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.707715 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="438bef39-6283-4b17-b551-74f127660dbd" containerName="watcher-decision-engine"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.707737 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-log"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.707750 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" containerName="glance-httpd"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.708739 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.717104 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.717234 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.745525 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.865433 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866045 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/533d5390-298e-4c44-9e48-6dd56773abd7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866174 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-config-data\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866322 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866437 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvnl\" (UniqueName: \"kubernetes.io/projected/533d5390-298e-4c44-9e48-6dd56773abd7-kube-api-access-ztvnl\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866547 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866749 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-scripts\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.866941 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533d5390-298e-4c44-9e48-6dd56773abd7-logs\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969138 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533d5390-298e-4c44-9e48-6dd56773abd7-logs\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969269 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969320 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/533d5390-298e-4c44-9e48-6dd56773abd7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969351 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-config-data\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969401 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969427 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztvnl\" (UniqueName: \"kubernetes.io/projected/533d5390-298e-4c44-9e48-6dd56773abd7-kube-api-access-ztvnl\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969453 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.969491 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-scripts\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.970128 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.970409 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/533d5390-298e-4c44-9e48-6dd56773abd7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.970441 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/533d5390-298e-4c44-9e48-6dd56773abd7-logs\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.975206 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.977451 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-config-data\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.983349 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-scripts\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.985919 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533d5390-298e-4c44-9e48-6dd56773abd7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:45 crc kubenswrapper[4937]: I0123 06:56:45.995581 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztvnl\" (UniqueName: \"kubernetes.io/projected/533d5390-298e-4c44-9e48-6dd56773abd7-kube-api-access-ztvnl\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.009727 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"533d5390-298e-4c44-9e48-6dd56773abd7\") " pod="openstack/glance-default-external-api-0"
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.035384 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.393957 4937 generic.go:334] "Generic (PLEG): container finished" podID="08017a1f-9b75-4907-8e99-fbec667904c2" containerID="3fd318956397264b23fee02e5f42d27bb86bd013beddacad4cf7f1514dfaefa2" exitCode=0
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.394056 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08017a1f-9b75-4907-8e99-fbec667904c2","Type":"ContainerDied","Data":"3fd318956397264b23fee02e5f42d27bb86bd013beddacad4cf7f1514dfaefa2"}
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.539463 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe28e0b-21c6-4ae2-8878-698855b69670" path="/var/lib/kubelet/pods/7fe28e0b-21c6-4ae2-8878-698855b69670/volumes"
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.641171 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.836578 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.987733 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-httpd-run\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.987835 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-config-data\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.987970 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-logs\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988004 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-internal-tls-certs\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988563 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-scripts\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988352 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988623 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-combined-ca-bundle\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988491 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-logs" (OuterVolumeSpecName: "logs") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988712 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r972\" (UniqueName: \"kubernetes.io/projected/08017a1f-9b75-4907-8e99-fbec667904c2-kube-api-access-4r972\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.988788 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"08017a1f-9b75-4907-8e99-fbec667904c2\" (UID: \"08017a1f-9b75-4907-8e99-fbec667904c2\") "
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.989424 4937 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.989443 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08017a1f-9b75-4907-8e99-fbec667904c2-logs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.994168 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.995308 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08017a1f-9b75-4907-8e99-fbec667904c2-kube-api-access-4r972" (OuterVolumeSpecName: "kube-api-access-4r972") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "kube-api-access-4r972". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:56:46 crc kubenswrapper[4937]: I0123 06:56:46.999076 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-scripts" (OuterVolumeSpecName: "scripts") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.017825 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.043934 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-config-data" (OuterVolumeSpecName: "config-data") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.065630 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08017a1f-9b75-4907-8e99-fbec667904c2" (UID: "08017a1f-9b75-4907-8e99-fbec667904c2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.091208 4937 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.091240 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.091249 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.091260 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r972\" (UniqueName: \"kubernetes.io/projected/08017a1f-9b75-4907-8e99-fbec667904c2-kube-api-access-4r972\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.091289 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.091298 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08017a1f-9b75-4907-8e99-fbec667904c2-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.117106 4937 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.193095 4937 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.426079 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"533d5390-298e-4c44-9e48-6dd56773abd7","Type":"ContainerStarted","Data":"5812e6e9b30fa5768428c210220c19d2d6c2a6def9f3b5eadad3ffa496f6c8e6"}
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.426355 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"533d5390-298e-4c44-9e48-6dd56773abd7","Type":"ContainerStarted","Data":"1ec212a808c3c40f7c05770f3813933a5a5a0163257dc938963cd6cb8ab6a639"}
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.430771 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"08017a1f-9b75-4907-8e99-fbec667904c2","Type":"ContainerDied","Data":"013e2d90e79c3ba40834fd71084f277513e35b08baa3f2cc39949f11efa1a004"}
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.430978 4937 scope.go:117] "RemoveContainer" containerID="3fd318956397264b23fee02e5f42d27bb86bd013beddacad4cf7f1514dfaefa2"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.430841 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.472063 4937 scope.go:117] "RemoveContainer" containerID="ebcd2bf40ed446a7c2f13c548704415cf5ae69e0cb8dc64875bcde8813450de8"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.493355 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.513262 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.526257 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:47 crc kubenswrapper[4937]: E0123 06:56:47.527404 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-log"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.527428 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-log"
Jan 23 06:56:47 crc kubenswrapper[4937]: E0123 06:56:47.527446 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-httpd"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.527453 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-httpd"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.527669 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-httpd"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.527695 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" containerName="glance-log"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.528644 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.532623 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.532924 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.593942 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.604334 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.604537 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.604570 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b82ffa-578a-490a-8d72-637c6236e89d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.604685 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bt4b\" (UniqueName: \"kubernetes.io/projected/d8b82ffa-578a-490a-8d72-637c6236e89d-kube-api-access-8bt4b\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.604854 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8b82ffa-578a-490a-8d72-637c6236e89d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.604887 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.605221 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.605245 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708398 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708454 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708503 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708551 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708567 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b82ffa-578a-490a-8d72-637c6236e89d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708658 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bt4b\" (UniqueName: \"kubernetes.io/projected/d8b82ffa-578a-490a-8d72-637c6236e89d-kube-api-access-8bt4b\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708735 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8b82ffa-578a-490a-8d72-637c6236e89d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.708769 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.709473 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8b82ffa-578a-490a-8d72-637c6236e89d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.709536 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8b82ffa-578a-490a-8d72-637c6236e89d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.709663 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.713580 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.714582 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.714883 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.723809 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8b82ffa-578a-490a-8d72-637c6236e89d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.736095 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bt4b\" (UniqueName: \"kubernetes.io/projected/d8b82ffa-578a-490a-8d72-637c6236e89d-kube-api-access-8bt4b\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.742731 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"d8b82ffa-578a-490a-8d72-637c6236e89d\") " pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:47 crc kubenswrapper[4937]: I0123 06:56:47.905839 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 06:56:48 crc kubenswrapper[4937]: I0123 06:56:48.444947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"533d5390-298e-4c44-9e48-6dd56773abd7","Type":"ContainerStarted","Data":"7e672d3813a064268a9541639f0d4a9639a04021c2fb5ca7f2746f477b8a81f2"}
Jan 23 06:56:48 crc kubenswrapper[4937]: I0123 06:56:48.470410 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 06:56:48 crc kubenswrapper[4937]: I0123 06:56:48.479608 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.479440231 podStartE2EDuration="3.479440231s" podCreationTimestamp="2026-01-23 06:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:48.463921411 +0000 UTC m=+1408.267688064" watchObservedRunningTime="2026-01-23 06:56:48.479440231 +0000 UTC m=+1408.283206894"
Jan 23 06:56:48 crc kubenswrapper[4937]: I0123 06:56:48.537586 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08017a1f-9b75-4907-8e99-fbec667904c2" path="/var/lib/kubelet/pods/08017a1f-9b75-4907-8e99-fbec667904c2/volumes"
Jan 23 06:56:49 crc kubenswrapper[4937]: I0123 06:56:49.459700 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d8b82ffa-578a-490a-8d72-637c6236e89d","Type":"ContainerStarted","Data":"26601f5aa83765d74d2406ec331daae2b4862eba6f7597e62fe0af817e5c87c5"}
Jan 23 06:56:49 crc kubenswrapper[4937]: I0123 06:56:49.460253 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d8b82ffa-578a-490a-8d72-637c6236e89d","Type":"ContainerStarted","Data":"3e323215147d19d58afff33c933400182d917c4a0af504761e20446e4352f4d4"}
Jan 23 06:56:50 crc kubenswrapper[4937]: I0123 06:56:50.467844 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d8b82ffa-578a-490a-8d72-637c6236e89d","Type":"ContainerStarted","Data":"1e47f6637ceb26f9684c119ed1cdc4f6e7019aed89f4a1f6ba1940d249a14524"}
Jan 23 06:56:50 crc kubenswrapper[4937]: I0123 06:56:50.470359 4937 generic.go:334] "Generic (PLEG): container finished" podID="e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" containerID="da55b988ef3e3f777e74f987057d7267952014001749ab4eba172f45c88978e7" exitCode=0
Jan 23 06:56:50 crc kubenswrapper[4937]: I0123 06:56:50.470390 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-26qqm" event={"ID":"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3","Type":"ContainerDied","Data":"da55b988ef3e3f777e74f987057d7267952014001749ab4eba172f45c88978e7"}
Jan 23 06:56:50 crc kubenswrapper[4937]: I0123 06:56:50.489078 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.489055927 podStartE2EDuration="3.489055927s" podCreationTimestamp="2026-01-23 06:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:50.487389202 +0000 UTC m=+1410.291155875" watchObservedRunningTime="2026-01-23 06:56:50.489055927 +0000 UTC m=+1410.292822580" Jan 23
06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.836942 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.905176 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-config-data\") pod \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.905295 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-combined-ca-bundle\") pod \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.905429 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-scripts\") pod \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.905521 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmhkm\" (UniqueName: \"kubernetes.io/projected/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-kube-api-access-tmhkm\") pod \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\" (UID: \"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3\") " Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.911227 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-kube-api-access-tmhkm" (OuterVolumeSpecName: "kube-api-access-tmhkm") pod "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" (UID: "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3"). 
InnerVolumeSpecName "kube-api-access-tmhkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.911700 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-scripts" (OuterVolumeSpecName: "scripts") pod "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" (UID: "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.938607 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-config-data" (OuterVolumeSpecName: "config-data") pod "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" (UID: "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:51 crc kubenswrapper[4937]: I0123 06:56:51.938649 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" (UID: "e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.007646 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.007695 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.007708 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.007717 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmhkm\" (UniqueName: \"kubernetes.io/projected/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3-kube-api-access-tmhkm\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.496032 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-26qqm" event={"ID":"e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3","Type":"ContainerDied","Data":"1474dccdb0d4b5d84cca6244b233ef377db34db37d69f49f0df1255dc2dd950f"} Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.496066 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1474dccdb0d4b5d84cca6244b233ef377db34db37d69f49f0df1255dc2dd950f" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.496317 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-26qqm" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.608311 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:52 crc kubenswrapper[4937]: E0123 06:56:52.608862 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" containerName="nova-cell0-conductor-db-sync" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.608889 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" containerName="nova-cell0-conductor-db-sync" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.609110 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" containerName="nova-cell0-conductor-db-sync" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.609985 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.611529 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.613688 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qsw8t" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.618546 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.708103 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.720078 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnjrv\" (UniqueName: 
\"kubernetes.io/projected/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-kube-api-access-bnjrv\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.720272 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.720324 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.732787 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.822075 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.822151 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.822224 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnjrv\" (UniqueName: \"kubernetes.io/projected/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-kube-api-access-bnjrv\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.827773 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.829238 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.850275 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnjrv\" (UniqueName: \"kubernetes.io/projected/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-kube-api-access-bnjrv\") pod \"nova-cell0-conductor-0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:52 crc kubenswrapper[4937]: I0123 06:56:52.938750 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:53 crc kubenswrapper[4937]: I0123 06:56:53.388833 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:53 crc kubenswrapper[4937]: W0123 06:56:53.399107 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b8603ce_424c_44c2_87e7_b6e20f39d6e0.slice/crio-fefa4dded7ece1c82451cc8dc9385062bb8651f694d014a595186e7f5a068602 WatchSource:0}: Error finding container fefa4dded7ece1c82451cc8dc9385062bb8651f694d014a595186e7f5a068602: Status 404 returned error can't find the container with id fefa4dded7ece1c82451cc8dc9385062bb8651f694d014a595186e7f5a068602 Jan 23 06:56:53 crc kubenswrapper[4937]: I0123 06:56:53.509834 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2b8603ce-424c-44c2-87e7-b6e20f39d6e0","Type":"ContainerStarted","Data":"fefa4dded7ece1c82451cc8dc9385062bb8651f694d014a595186e7f5a068602"} Jan 23 06:56:53 crc kubenswrapper[4937]: I0123 06:56:53.510021 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:53 crc kubenswrapper[4937]: I0123 06:56:53.607632 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 06:56:54 crc kubenswrapper[4937]: I0123 06:56:54.520718 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2b8603ce-424c-44c2-87e7-b6e20f39d6e0","Type":"ContainerStarted","Data":"c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7"} Jan 23 06:56:54 crc kubenswrapper[4937]: I0123 06:56:54.521192 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:54 crc kubenswrapper[4937]: I0123 06:56:54.543494 4937 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.543475765 podStartE2EDuration="2.543475765s" podCreationTimestamp="2026-01-23 06:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:56:54.539723123 +0000 UTC m=+1414.343489776" watchObservedRunningTime="2026-01-23 06:56:54.543475765 +0000 UTC m=+1414.347242418" Jan 23 06:56:56 crc kubenswrapper[4937]: I0123 06:56:56.036464 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:56:56 crc kubenswrapper[4937]: I0123 06:56:56.036524 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 06:56:56 crc kubenswrapper[4937]: I0123 06:56:56.076875 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:56:56 crc kubenswrapper[4937]: I0123 06:56:56.105113 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 06:56:56 crc kubenswrapper[4937]: I0123 06:56:56.539425 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:56:56 crc kubenswrapper[4937]: I0123 06:56:56.539758 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 06:56:57 crc kubenswrapper[4937]: I0123 06:56:57.907057 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:57 crc kubenswrapper[4937]: I0123 06:56:57.907393 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:57 crc kubenswrapper[4937]: 
I0123 06:56:57.940862 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:57 crc kubenswrapper[4937]: I0123 06:56:57.954669 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:57 crc kubenswrapper[4937]: I0123 06:56:57.992517 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:57 crc kubenswrapper[4937]: I0123 06:56:57.992802 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2b8603ce-424c-44c2-87e7-b6e20f39d6e0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7" gracePeriod=30 Jan 23 06:56:58 crc kubenswrapper[4937]: I0123 06:56:58.436807 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 06:56:58 crc kubenswrapper[4937]: I0123 06:56:58.438884 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 06:56:58 crc kubenswrapper[4937]: I0123 06:56:58.559445 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:58 crc kubenswrapper[4937]: I0123 06:56:58.559488 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.050814 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.121887 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnjrv\" (UniqueName: \"kubernetes.io/projected/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-kube-api-access-bnjrv\") pod \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.122049 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-combined-ca-bundle\") pod \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.122182 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-config-data\") pod \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\" (UID: \"2b8603ce-424c-44c2-87e7-b6e20f39d6e0\") " Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.134536 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-kube-api-access-bnjrv" (OuterVolumeSpecName: "kube-api-access-bnjrv") pod "2b8603ce-424c-44c2-87e7-b6e20f39d6e0" (UID: "2b8603ce-424c-44c2-87e7-b6e20f39d6e0"). InnerVolumeSpecName "kube-api-access-bnjrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.160702 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b8603ce-424c-44c2-87e7-b6e20f39d6e0" (UID: "2b8603ce-424c-44c2-87e7-b6e20f39d6e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.180893 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-config-data" (OuterVolumeSpecName: "config-data") pod "2b8603ce-424c-44c2-87e7-b6e20f39d6e0" (UID: "2b8603ce-424c-44c2-87e7-b6e20f39d6e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.225276 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.225419 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnjrv\" (UniqueName: \"kubernetes.io/projected/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-kube-api-access-bnjrv\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.225433 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b8603ce-424c-44c2-87e7-b6e20f39d6e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.577387 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b8603ce-424c-44c2-87e7-b6e20f39d6e0" containerID="c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7" exitCode=0 Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.578418 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.581822 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2b8603ce-424c-44c2-87e7-b6e20f39d6e0","Type":"ContainerDied","Data":"c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7"} Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.581883 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2b8603ce-424c-44c2-87e7-b6e20f39d6e0","Type":"ContainerDied","Data":"fefa4dded7ece1c82451cc8dc9385062bb8651f694d014a595186e7f5a068602"} Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.583308 4937 scope.go:117] "RemoveContainer" containerID="c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.611852 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.623742 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.624770 4937 scope.go:117] "RemoveContainer" containerID="c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7" Jan 23 06:56:59 crc kubenswrapper[4937]: E0123 06:56:59.625233 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7\": container with ID starting with c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7 not found: ID does not exist" containerID="c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.625280 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7"} err="failed to get container status \"c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7\": rpc error: code = NotFound desc = could not find container \"c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7\": container with ID starting with c1be938039f52e4ced564e01e3162e67501920f0d1aeea265f07d17b15c0c9f7 not found: ID does not exist" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.660666 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:59 crc kubenswrapper[4937]: E0123 06:56:59.661180 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8603ce-424c-44c2-87e7-b6e20f39d6e0" containerName="nova-cell0-conductor-conductor" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.661196 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8603ce-424c-44c2-87e7-b6e20f39d6e0" containerName="nova-cell0-conductor-conductor" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.661445 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8603ce-424c-44c2-87e7-b6e20f39d6e0" containerName="nova-cell0-conductor-conductor" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.662256 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.666271 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qsw8t" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.666384 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.668611 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.737004 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.737051 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.737187 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qg9m\" (UniqueName: \"kubernetes.io/projected/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-kube-api-access-6qg9m\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.838851 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.838887 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.838974 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qg9m\" (UniqueName: \"kubernetes.io/projected/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-kube-api-access-6qg9m\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.844308 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.844628 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.871182 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qg9m\" (UniqueName: \"kubernetes.io/projected/5ec145bc-8e86-4bd4-9741-f8f7512c5f3c-kube-api-access-6qg9m\") pod \"nova-cell0-conductor-0\" 
(UID: \"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c\") " pod="openstack/nova-cell0-conductor-0" Jan 23 06:56:59 crc kubenswrapper[4937]: I0123 06:56:59.990172 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 06:57:00 crc kubenswrapper[4937]: I0123 06:57:00.476425 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 06:57:00 crc kubenswrapper[4937]: I0123 06:57:00.493459 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 06:57:00 crc kubenswrapper[4937]: I0123 06:57:00.540695 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8603ce-424c-44c2-87e7-b6e20f39d6e0" path="/var/lib/kubelet/pods/2b8603ce-424c-44c2-87e7-b6e20f39d6e0/volumes" Jan 23 06:57:00 crc kubenswrapper[4937]: I0123 06:57:00.591633 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c","Type":"ContainerStarted","Data":"e6a19f521eeaf8a4c113d76dfa9af6741824429f7c36e433210e4689058721ac"} Jan 23 06:57:00 crc kubenswrapper[4937]: I0123 06:57:00.591742 4937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 06:57:00 crc kubenswrapper[4937]: I0123 06:57:00.630491 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 06:57:01 crc kubenswrapper[4937]: I0123 06:57:01.606974 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5ec145bc-8e86-4bd4-9741-f8f7512c5f3c","Type":"ContainerStarted","Data":"8c4694ce713b678329a14be7c61eb6a036e35666ea1db2f7883b6464f389f8a7"} Jan 23 06:57:01 crc kubenswrapper[4937]: I0123 06:57:01.609183 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 06:57:01 crc 
kubenswrapper[4937]: I0123 06:57:01.639676 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.6396569579999998 podStartE2EDuration="2.639656958s" podCreationTimestamp="2026-01-23 06:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:01.632467363 +0000 UTC m=+1421.436234056" watchObservedRunningTime="2026-01-23 06:57:01.639656958 +0000 UTC m=+1421.443423621" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.627182 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.659990 4937 generic.go:334] "Generic (PLEG): container finished" podID="ccd98200-80ca-4223-aa9a-720238626619" containerID="a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1" exitCode=137 Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.660271 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerDied","Data":"a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1"} Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.660361 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ccd98200-80ca-4223-aa9a-720238626619","Type":"ContainerDied","Data":"d3b8cfb6307d6d3120f7b1ffe185cdf5ef9156a0e4364144f89240d667610cdf"} Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.660445 4937 scope.go:117] "RemoveContainer" containerID="a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.660708 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.681883 4937 scope.go:117] "RemoveContainer" containerID="c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.701001 4937 scope.go:117] "RemoveContainer" containerID="b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.716905 4937 scope.go:117] "RemoveContainer" containerID="f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.732670 4937 scope.go:117] "RemoveContainer" containerID="a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1" Jan 23 06:57:04 crc kubenswrapper[4937]: E0123 06:57:04.732990 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1\": container with ID starting with a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1 not found: ID does not exist" containerID="a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.733079 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1"} err="failed to get container status \"a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1\": rpc error: code = NotFound desc = could not find container \"a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1\": container with ID starting with a72df44afe9461ea4e60c9512c3b40368ad6474515e7b1f5b55563e001ca3fa1 not found: ID does not exist" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.733207 4937 scope.go:117] "RemoveContainer" 
containerID="c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0" Jan 23 06:57:04 crc kubenswrapper[4937]: E0123 06:57:04.733504 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0\": container with ID starting with c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0 not found: ID does not exist" containerID="c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.733531 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0"} err="failed to get container status \"c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0\": rpc error: code = NotFound desc = could not find container \"c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0\": container with ID starting with c865b68893963b0227fdfc8fc1c80c7a08702a9eb20e7215684c75932d4a91e0 not found: ID does not exist" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.733549 4937 scope.go:117] "RemoveContainer" containerID="b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05" Jan 23 06:57:04 crc kubenswrapper[4937]: E0123 06:57:04.733849 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05\": container with ID starting with b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05 not found: ID does not exist" containerID="b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.733877 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05"} err="failed to get container status \"b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05\": rpc error: code = NotFound desc = could not find container \"b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05\": container with ID starting with b1ca4e5fd284cd5afa2f79f95e8f46e0d50a9e8c482e04555985a7a0cbf35a05 not found: ID does not exist" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.733889 4937 scope.go:117] "RemoveContainer" containerID="f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf" Jan 23 06:57:04 crc kubenswrapper[4937]: E0123 06:57:04.734084 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf\": container with ID starting with f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf not found: ID does not exist" containerID="f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.734159 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf"} err="failed to get container status \"f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf\": rpc error: code = NotFound desc = could not find container \"f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf\": container with ID starting with f61c9ef70e1617460cf315941b67c3ea9ede154d98a8be37e1d577f3c6f8e6cf not found: ID does not exist" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.755104 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-config-data\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" 
(UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.755280 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-combined-ca-bundle\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.755440 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-run-httpd\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.756028 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.756274 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-sg-core-conf-yaml\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.756684 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-scripts\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.756793 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crzst\" (UniqueName: \"kubernetes.io/projected/ccd98200-80ca-4223-aa9a-720238626619-kube-api-access-crzst\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.756894 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-log-httpd\") pod \"ccd98200-80ca-4223-aa9a-720238626619\" (UID: \"ccd98200-80ca-4223-aa9a-720238626619\") " Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.757539 4937 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.757765 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.760393 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-scripts" (OuterVolumeSpecName: "scripts") pod "ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.765836 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd98200-80ca-4223-aa9a-720238626619-kube-api-access-crzst" (OuterVolumeSpecName: "kube-api-access-crzst") pod "ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "kube-api-access-crzst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.780899 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.826747 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.855794 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-config-data" (OuterVolumeSpecName: "config-data") pod "ccd98200-80ca-4223-aa9a-720238626619" (UID: "ccd98200-80ca-4223-aa9a-720238626619"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.859500 4937 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.859534 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.859548 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crzst\" (UniqueName: \"kubernetes.io/projected/ccd98200-80ca-4223-aa9a-720238626619-kube-api-access-crzst\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.859564 4937 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccd98200-80ca-4223-aa9a-720238626619-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.859575 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:04 crc kubenswrapper[4937]: I0123 06:57:04.859586 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ccd98200-80ca-4223-aa9a-720238626619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.022636 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.036813 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.055115 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:05 crc kubenswrapper[4937]: E0123 06:57:05.055639 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-central-agent" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.055665 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-central-agent" Jan 23 06:57:05 crc kubenswrapper[4937]: E0123 06:57:05.055690 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="proxy-httpd" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.055699 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="proxy-httpd" Jan 23 06:57:05 crc kubenswrapper[4937]: E0123 06:57:05.055714 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-notification-agent" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.055722 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-notification-agent" Jan 23 06:57:05 crc kubenswrapper[4937]: E0123 06:57:05.055744 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="sg-core" Jan 23 06:57:05 
crc kubenswrapper[4937]: I0123 06:57:05.055754 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="sg-core" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.055972 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="proxy-httpd" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.055999 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="sg-core" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.056010 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-central-agent" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.056030 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd98200-80ca-4223-aa9a-720238626619" containerName="ceilometer-notification-agent" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.058265 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.060782 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.062269 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.093478 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.165441 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.165666 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.165743 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.165932 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnv49\" (UniqueName: \"kubernetes.io/projected/5ddce763-a91f-4529-b73a-262f9d020da1-kube-api-access-xnv49\") pod \"ceilometer-0\" (UID: 
\"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.165960 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.166063 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-config-data\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.166130 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-scripts\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268527 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268586 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268823 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xnv49\" (UniqueName: \"kubernetes.io/projected/5ddce763-a91f-4529-b73a-262f9d020da1-kube-api-access-xnv49\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268853 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268873 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-config-data\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268893 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-scripts\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.268932 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.269130 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-log-httpd\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " 
pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.269347 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-run-httpd\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.272772 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.272781 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-scripts\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.273158 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-config-data\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.273734 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.290796 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnv49\" (UniqueName: 
\"kubernetes.io/projected/5ddce763-a91f-4529-b73a-262f9d020da1-kube-api-access-xnv49\") pod \"ceilometer-0\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.396201 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:05 crc kubenswrapper[4937]: I0123 06:57:05.830403 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:05 crc kubenswrapper[4937]: W0123 06:57:05.830446 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-cfafb4456254c74edfc50e26a6440d3168fceeac6f9a369dcc02137d3b37ad85 WatchSource:0}: Error finding container cfafb4456254c74edfc50e26a6440d3168fceeac6f9a369dcc02137d3b37ad85: Status 404 returned error can't find the container with id cfafb4456254c74edfc50e26a6440d3168fceeac6f9a369dcc02137d3b37ad85 Jan 23 06:57:06 crc kubenswrapper[4937]: I0123 06:57:06.538396 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccd98200-80ca-4223-aa9a-720238626619" path="/var/lib/kubelet/pods/ccd98200-80ca-4223-aa9a-720238626619/volumes" Jan 23 06:57:06 crc kubenswrapper[4937]: I0123 06:57:06.681528 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerStarted","Data":"a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064"} Jan 23 06:57:06 crc kubenswrapper[4937]: I0123 06:57:06.681570 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerStarted","Data":"909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca"} Jan 23 06:57:06 crc kubenswrapper[4937]: I0123 06:57:06.681581 4937 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerStarted","Data":"cfafb4456254c74edfc50e26a6440d3168fceeac6f9a369dcc02137d3b37ad85"} Jan 23 06:57:07 crc kubenswrapper[4937]: I0123 06:57:07.697748 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerStarted","Data":"30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f"} Jan 23 06:57:08 crc kubenswrapper[4937]: I0123 06:57:08.708770 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerStarted","Data":"4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126"} Jan 23 06:57:08 crc kubenswrapper[4937]: I0123 06:57:08.709059 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:57:08 crc kubenswrapper[4937]: I0123 06:57:08.728675 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.394530426 podStartE2EDuration="3.728654326s" podCreationTimestamp="2026-01-23 06:57:05 +0000 UTC" firstStartedPulling="2026-01-23 06:57:05.834580166 +0000 UTC m=+1425.638346839" lastFinishedPulling="2026-01-23 06:57:08.168704086 +0000 UTC m=+1427.972470739" observedRunningTime="2026-01-23 06:57:08.72625283 +0000 UTC m=+1428.530019493" watchObservedRunningTime="2026-01-23 06:57:08.728654326 +0000 UTC m=+1428.532420989" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.021397 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.516473 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-z6jhd"] Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.518504 4937 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.521629 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.521635 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.550258 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-z6jhd"] Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.576853 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-config-data\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.576909 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.576956 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnb9\" (UniqueName: \"kubernetes.io/projected/f2bf80e7-424f-41b4-8baf-393a440d8c1c-kube-api-access-2gnb9\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.577155 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-scripts\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.679358 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-scripts\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.679442 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-config-data\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.679478 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.679517 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnb9\" (UniqueName: \"kubernetes.io/projected/f2bf80e7-424f-41b4-8baf-393a440d8c1c-kube-api-access-2gnb9\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.686520 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-config-data\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.686933 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-scripts\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.689471 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.734117 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnb9\" (UniqueName: \"kubernetes.io/projected/f2bf80e7-424f-41b4-8baf-393a440d8c1c-kube-api-access-2gnb9\") pod \"nova-cell0-cell-mapping-z6jhd\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.734180 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.736086 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.755017 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.776479 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.783478 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmhzg\" (UniqueName: \"kubernetes.io/projected/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-kube-api-access-fmhzg\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.783563 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.783670 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-config-data\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.783723 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-logs\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.838419 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 
06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.839957 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.842156 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.853685 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.868673 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885316 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885375 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885430 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-config-data\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885494 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-logs\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885538 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmhzg\" (UniqueName: \"kubernetes.io/projected/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-kube-api-access-fmhzg\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885559 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb72r\" (UniqueName: \"kubernetes.io/projected/a4f06343-b1d2-4394-8271-febc81587ec7-kube-api-access-cb72r\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.885607 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-config-data\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.891121 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-logs\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.922790 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-config-data\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc 
kubenswrapper[4937]: I0123 06:57:10.923610 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.942990 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmhzg\" (UniqueName: \"kubernetes.io/projected/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-kube-api-access-fmhzg\") pod \"nova-api-0\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " pod="openstack/nova-api-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.989884 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.990033 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb72r\" (UniqueName: \"kubernetes.io/projected/a4f06343-b1d2-4394-8271-febc81587ec7-kube-api-access-cb72r\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.990068 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-config-data\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:10 crc kubenswrapper[4937]: I0123 06:57:10.998501 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.006339 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-config-data\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.040074 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb72r\" (UniqueName: \"kubernetes.io/projected/a4f06343-b1d2-4394-8271-febc81587ec7-kube-api-access-cb72r\") pod \"nova-scheduler-0\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.072640 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.074289 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.085950 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.098278 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.125418 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.176478 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.206794 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67e860ba-8a5f-427b-ba19-21c252e60aef-logs\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.223359 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j9gn\" (UniqueName: \"kubernetes.io/projected/67e860ba-8a5f-427b-ba19-21c252e60aef-kube-api-access-6j9gn\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.223627 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.223715 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-config-data\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.259004 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745797d8cc-znx7z"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.261673 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.277622 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745797d8cc-znx7z"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.317286 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.319150 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.321917 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326042 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-svc\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326097 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-config\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326120 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-nb\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 
06:57:11.326147 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpr79\" (UniqueName: \"kubernetes.io/projected/5302d833-0e59-4aeb-8699-b782840b9fee-kube-api-access-fpr79\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326182 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67e860ba-8a5f-427b-ba19-21c252e60aef-logs\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326219 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j9gn\" (UniqueName: \"kubernetes.io/projected/67e860ba-8a5f-427b-ba19-21c252e60aef-kube-api-access-6j9gn\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326266 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-sb\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326306 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326341 4937 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-config-data\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326365 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-swift-storage-0\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.326886 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67e860ba-8a5f-427b-ba19-21c252e60aef-logs\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.330768 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.337733 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.349319 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-config-data\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.360842 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6j9gn\" (UniqueName: \"kubernetes.io/projected/67e860ba-8a5f-427b-ba19-21c252e60aef-kube-api-access-6j9gn\") pod \"nova-metadata-0\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.420305 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.428953 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-sb\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429039 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429128 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-swift-storage-0\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429164 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-svc\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc 
kubenswrapper[4937]: I0123 06:57:11.429212 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-config\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429243 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-nb\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429271 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpr79\" (UniqueName: \"kubernetes.io/projected/5302d833-0e59-4aeb-8699-b782840b9fee-kube-api-access-fpr79\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429323 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.429354 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n64rl\" (UniqueName: \"kubernetes.io/projected/a1c1a51b-b5cd-454a-bbb0-c284f595772f-kube-api-access-n64rl\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 
06:57:11.430068 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-swift-storage-0\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.432442 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-nb\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.433122 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-config\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.437042 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-sb\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.439491 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-svc\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.451896 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpr79\" 
(UniqueName: \"kubernetes.io/projected/5302d833-0e59-4aeb-8699-b782840b9fee-kube-api-access-fpr79\") pod \"dnsmasq-dns-745797d8cc-znx7z\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.534188 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.534242 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n64rl\" (UniqueName: \"kubernetes.io/projected/a1c1a51b-b5cd-454a-bbb0-c284f595772f-kube-api-access-n64rl\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.534312 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.538220 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.551140 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.559459 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n64rl\" (UniqueName: \"kubernetes.io/projected/a1c1a51b-b5cd-454a-bbb0-c284f595772f-kube-api-access-n64rl\") pod \"nova-cell1-novncproxy-0\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.611073 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.662580 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.727043 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-z6jhd"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.787810 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z6jhd" event={"ID":"f2bf80e7-424f-41b4-8baf-393a440d8c1c","Type":"ContainerStarted","Data":"2361bd8feb3b5a85fadbc25b23b40d9423b0c68c26ff7f9ff0e6b8f7ed511693"} Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.911312 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.952789 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.968425 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q4wrm"] Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.969746 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.979390 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.979436 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 06:57:11 crc kubenswrapper[4937]: I0123 06:57:11.983449 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q4wrm"] Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.044985 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-scripts\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.045258 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.045509 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrnx5\" (UniqueName: \"kubernetes.io/projected/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-kube-api-access-hrnx5\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.045792 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-config-data\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.151880 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-config-data\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.151972 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-scripts\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.152109 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.153350 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrnx5\" (UniqueName: \"kubernetes.io/projected/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-kube-api-access-hrnx5\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.160410 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-scripts\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.162476 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.164734 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-config-data\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.172311 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.177833 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrnx5\" (UniqueName: \"kubernetes.io/projected/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-kube-api-access-hrnx5\") pod \"nova-cell1-conductor-db-sync-q4wrm\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.267559 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745797d8cc-znx7z"] Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.306126 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.401470 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:57:12 crc kubenswrapper[4937]: W0123 06:57:12.413445 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c1a51b_b5cd_454a_bbb0_c284f595772f.slice/crio-033adb405ac045210dd1bdfecf7e5bef33abb09d374c4a66c0f900111c581b44 WatchSource:0}: Error finding container 033adb405ac045210dd1bdfecf7e5bef33abb09d374c4a66c0f900111c581b44: Status 404 returned error can't find the container with id 033adb405ac045210dd1bdfecf7e5bef33abb09d374c4a66c0f900111c581b44 Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.803152 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67e860ba-8a5f-427b-ba19-21c252e60aef","Type":"ContainerStarted","Data":"50fe2dc1c4e4a0054d4f393cd4b43587b05331ae07eba6f518d4f28f7c6e2dd3"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.804845 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5c0b390-f123-48d4-98bd-fdb7791ac9ef","Type":"ContainerStarted","Data":"ea507686509e2437ef2622d69f6beab0cd78800c21b2ecd72987d45dbe78b16a"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.806729 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4f06343-b1d2-4394-8271-febc81587ec7","Type":"ContainerStarted","Data":"e3f2dc8bd8a1bbb43f57093fffaa096e23a697ed488949c1e5ac6f841c840b3f"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.809928 4937 generic.go:334] "Generic (PLEG): container finished" podID="5302d833-0e59-4aeb-8699-b782840b9fee" containerID="c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d" exitCode=0 Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.811357 
4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" event={"ID":"5302d833-0e59-4aeb-8699-b782840b9fee","Type":"ContainerDied","Data":"c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.811396 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" event={"ID":"5302d833-0e59-4aeb-8699-b782840b9fee","Type":"ContainerStarted","Data":"589fdb5b6349b2e14e1120e52fcc59b164347d7d9f39abec09e78c4490f37a69"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.814455 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z6jhd" event={"ID":"f2bf80e7-424f-41b4-8baf-393a440d8c1c","Type":"ContainerStarted","Data":"a752c7dc8111ec23ecdd255f9de261340376389ed6e07fdfc7b3481f0a1f4b50"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.816187 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a1c1a51b-b5cd-454a-bbb0-c284f595772f","Type":"ContainerStarted","Data":"033adb405ac045210dd1bdfecf7e5bef33abb09d374c4a66c0f900111c581b44"} Jan 23 06:57:12 crc kubenswrapper[4937]: I0123 06:57:12.828658 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q4wrm"] Jan 23 06:57:13 crc kubenswrapper[4937]: I0123 06:57:13.829362 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" event={"ID":"b7ea5f4b-dcae-4f04-b580-cc1e378d4073","Type":"ContainerStarted","Data":"32421920a89102b74239165d525fe669ce5d4dd8c1ba1c3f19cadef49c6eb9cc"} Jan 23 06:57:14 crc kubenswrapper[4937]: I0123 06:57:14.841976 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" event={"ID":"5302d833-0e59-4aeb-8699-b782840b9fee","Type":"ContainerStarted","Data":"24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6"} Jan 
23 06:57:14 crc kubenswrapper[4937]: I0123 06:57:14.842188 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:14 crc kubenswrapper[4937]: I0123 06:57:14.871692 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-z6jhd" podStartSLOduration=4.871670462 podStartE2EDuration="4.871670462s" podCreationTimestamp="2026-01-23 06:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:12.86627781 +0000 UTC m=+1432.670044463" watchObservedRunningTime="2026-01-23 06:57:14.871670462 +0000 UTC m=+1434.675437115" Jan 23 06:57:14 crc kubenswrapper[4937]: I0123 06:57:14.871793 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" podStartSLOduration=3.871789835 podStartE2EDuration="3.871789835s" podCreationTimestamp="2026-01-23 06:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:14.863889601 +0000 UTC m=+1434.667656264" watchObservedRunningTime="2026-01-23 06:57:14.871789835 +0000 UTC m=+1434.675556488" Jan 23 06:57:14 crc kubenswrapper[4937]: I0123 06:57:14.999165 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:15 crc kubenswrapper[4937]: I0123 06:57:15.011351 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.866274 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" event={"ID":"b7ea5f4b-dcae-4f04-b580-cc1e378d4073","Type":"ContainerStarted","Data":"b07b50b15265827e42ca199fe4f384c4b67dd85bf80e0d1c609946c1d0b8d5f3"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 
06:57:16.879123 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a1c1a51b-b5cd-454a-bbb0-c284f595772f","Type":"ContainerStarted","Data":"48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.879206 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a1c1a51b-b5cd-454a-bbb0-c284f595772f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146" gracePeriod=30 Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.883476 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67e860ba-8a5f-427b-ba19-21c252e60aef","Type":"ContainerStarted","Data":"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.883516 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67e860ba-8a5f-427b-ba19-21c252e60aef","Type":"ContainerStarted","Data":"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.883637 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-log" containerID="cri-o://677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf" gracePeriod=30 Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.883729 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-metadata" containerID="cri-o://e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166" gracePeriod=30 Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 
06:57:16.898711 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5c0b390-f123-48d4-98bd-fdb7791ac9ef","Type":"ContainerStarted","Data":"397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.898761 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5c0b390-f123-48d4-98bd-fdb7791ac9ef","Type":"ContainerStarted","Data":"8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.902583 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" podStartSLOduration=5.902569425 podStartE2EDuration="5.902569425s" podCreationTimestamp="2026-01-23 06:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:16.896043438 +0000 UTC m=+1436.699810101" watchObservedRunningTime="2026-01-23 06:57:16.902569425 +0000 UTC m=+1436.706336078" Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.906678 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4f06343-b1d2-4394-8271-febc81587ec7","Type":"ContainerStarted","Data":"babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322"} Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.941919 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.4969911 podStartE2EDuration="5.941891632s" podCreationTimestamp="2026-01-23 06:57:11 +0000 UTC" firstStartedPulling="2026-01-23 06:57:12.430020916 +0000 UTC m=+1432.233787569" lastFinishedPulling="2026-01-23 06:57:15.874921448 +0000 UTC m=+1435.678688101" observedRunningTime="2026-01-23 06:57:16.91785156 +0000 UTC m=+1436.721618213" watchObservedRunningTime="2026-01-23 
06:57:16.941891632 +0000 UTC m=+1436.745658305" Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.965131 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.255707415 podStartE2EDuration="6.965110452s" podCreationTimestamp="2026-01-23 06:57:10 +0000 UTC" firstStartedPulling="2026-01-23 06:57:12.16550119 +0000 UTC m=+1431.969267843" lastFinishedPulling="2026-01-23 06:57:15.874904227 +0000 UTC m=+1435.678670880" observedRunningTime="2026-01-23 06:57:16.949624992 +0000 UTC m=+1436.753391645" watchObservedRunningTime="2026-01-23 06:57:16.965110452 +0000 UTC m=+1436.768877105" Jan 23 06:57:16 crc kubenswrapper[4937]: I0123 06:57:16.991956 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.017607625 podStartE2EDuration="6.991912799s" podCreationTimestamp="2026-01-23 06:57:10 +0000 UTC" firstStartedPulling="2026-01-23 06:57:11.923760022 +0000 UTC m=+1431.727526675" lastFinishedPulling="2026-01-23 06:57:15.898065196 +0000 UTC m=+1435.701831849" observedRunningTime="2026-01-23 06:57:16.989279378 +0000 UTC m=+1436.793046031" watchObservedRunningTime="2026-01-23 06:57:16.991912799 +0000 UTC m=+1436.795679452" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.018995 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.073122781 podStartE2EDuration="7.018972823s" podCreationTimestamp="2026-01-23 06:57:10 +0000 UTC" firstStartedPulling="2026-01-23 06:57:11.928176001 +0000 UTC m=+1431.731942654" lastFinishedPulling="2026-01-23 06:57:15.874026043 +0000 UTC m=+1435.677792696" observedRunningTime="2026-01-23 06:57:17.01293656 +0000 UTC m=+1436.816703223" watchObservedRunningTime="2026-01-23 06:57:17.018972823 +0000 UTC m=+1436.822739476" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.596216 4937 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.675747 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-config-data\") pod \"67e860ba-8a5f-427b-ba19-21c252e60aef\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.675867 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-combined-ca-bundle\") pod \"67e860ba-8a5f-427b-ba19-21c252e60aef\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.675909 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j9gn\" (UniqueName: \"kubernetes.io/projected/67e860ba-8a5f-427b-ba19-21c252e60aef-kube-api-access-6j9gn\") pod \"67e860ba-8a5f-427b-ba19-21c252e60aef\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.676074 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67e860ba-8a5f-427b-ba19-21c252e60aef-logs\") pod \"67e860ba-8a5f-427b-ba19-21c252e60aef\" (UID: \"67e860ba-8a5f-427b-ba19-21c252e60aef\") " Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.676527 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e860ba-8a5f-427b-ba19-21c252e60aef-logs" (OuterVolumeSpecName: "logs") pod "67e860ba-8a5f-427b-ba19-21c252e60aef" (UID: "67e860ba-8a5f-427b-ba19-21c252e60aef"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.676982 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67e860ba-8a5f-427b-ba19-21c252e60aef-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.681278 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e860ba-8a5f-427b-ba19-21c252e60aef-kube-api-access-6j9gn" (OuterVolumeSpecName: "kube-api-access-6j9gn") pod "67e860ba-8a5f-427b-ba19-21c252e60aef" (UID: "67e860ba-8a5f-427b-ba19-21c252e60aef"). InnerVolumeSpecName "kube-api-access-6j9gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.708189 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-config-data" (OuterVolumeSpecName: "config-data") pod "67e860ba-8a5f-427b-ba19-21c252e60aef" (UID: "67e860ba-8a5f-427b-ba19-21c252e60aef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.717670 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67e860ba-8a5f-427b-ba19-21c252e60aef" (UID: "67e860ba-8a5f-427b-ba19-21c252e60aef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.779014 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.779047 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67e860ba-8a5f-427b-ba19-21c252e60aef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.779058 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j9gn\" (UniqueName: \"kubernetes.io/projected/67e860ba-8a5f-427b-ba19-21c252e60aef-kube-api-access-6j9gn\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.922766 4937 generic.go:334] "Generic (PLEG): container finished" podID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerID="e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166" exitCode=0 Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.922804 4937 generic.go:334] "Generic (PLEG): container finished" podID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerID="677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf" exitCode=143 Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.922897 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.922924 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67e860ba-8a5f-427b-ba19-21c252e60aef","Type":"ContainerDied","Data":"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166"} Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.922965 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67e860ba-8a5f-427b-ba19-21c252e60aef","Type":"ContainerDied","Data":"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf"} Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.922983 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67e860ba-8a5f-427b-ba19-21c252e60aef","Type":"ContainerDied","Data":"50fe2dc1c4e4a0054d4f393cd4b43587b05331ae07eba6f518d4f28f7c6e2dd3"} Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.923003 4937 scope.go:117] "RemoveContainer" containerID="e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.967719 4937 scope.go:117] "RemoveContainer" containerID="677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.970165 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.982371 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.997995 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:17 crc kubenswrapper[4937]: E0123 06:57:17.998388 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-log" Jan 23 06:57:17 crc 
kubenswrapper[4937]: I0123 06:57:17.998404 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-log" Jan 23 06:57:17 crc kubenswrapper[4937]: E0123 06:57:17.998435 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-metadata" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.998442 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-metadata" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.998642 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-log" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.998671 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" containerName="nova-metadata-metadata" Jan 23 06:57:17 crc kubenswrapper[4937]: I0123 06:57:17.999643 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.000191 4937 scope.go:117] "RemoveContainer" containerID="e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166" Jan 23 06:57:18 crc kubenswrapper[4937]: E0123 06:57:18.002270 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166\": container with ID starting with e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166 not found: ID does not exist" containerID="e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.002315 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166"} err="failed to get container status \"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166\": rpc error: code = NotFound desc = could not find container \"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166\": container with ID starting with e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166 not found: ID does not exist" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.002345 4937 scope.go:117] "RemoveContainer" containerID="677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.002500 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 06:57:18 crc kubenswrapper[4937]: E0123 06:57:18.002716 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf\": container with ID starting with 
677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf not found: ID does not exist" containerID="677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.002742 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf"} err="failed to get container status \"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf\": rpc error: code = NotFound desc = could not find container \"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf\": container with ID starting with 677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf not found: ID does not exist" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.002758 4937 scope.go:117] "RemoveContainer" containerID="e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.002921 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.003088 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166"} err="failed to get container status \"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166\": rpc error: code = NotFound desc = could not find container \"e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166\": container with ID starting with e404540d08bba2909967d479b0a9f0021a7aabc2841968ee09b8b9585498c166 not found: ID does not exist" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.003114 4937 scope.go:117] "RemoveContainer" containerID="677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.003682 4937 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf"} err="failed to get container status \"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf\": rpc error: code = NotFound desc = could not find container \"677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf\": container with ID starting with 677e1a3dd74f46e4d5960d5f3a0dfac994966e41ecc9c321af7a736e3b1d1fdf not found: ID does not exist" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.022883 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.084516 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.084630 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.084813 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flr4r\" (UniqueName: \"kubernetes.io/projected/7e322b64-f409-4e9d-a3d8-0d29990614cc-kube-api-access-flr4r\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.084864 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/7e322b64-f409-4e9d-a3d8-0d29990614cc-logs\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.085058 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-config-data\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.187225 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.187298 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.187369 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flr4r\" (UniqueName: \"kubernetes.io/projected/7e322b64-f409-4e9d-a3d8-0d29990614cc-kube-api-access-flr4r\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.187389 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e322b64-f409-4e9d-a3d8-0d29990614cc-logs\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " 
pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.187446 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-config-data\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.188489 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e322b64-f409-4e9d-a3d8-0d29990614cc-logs\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.193024 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.193372 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.197194 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-config-data\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.208181 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flr4r\" (UniqueName: 
\"kubernetes.io/projected/7e322b64-f409-4e9d-a3d8-0d29990614cc-kube-api-access-flr4r\") pod \"nova-metadata-0\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.325062 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.540646 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e860ba-8a5f-427b-ba19-21c252e60aef" path="/var/lib/kubelet/pods/67e860ba-8a5f-427b-ba19-21c252e60aef/volumes" Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.836255 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:18 crc kubenswrapper[4937]: I0123 06:57:18.934791 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e322b64-f409-4e9d-a3d8-0d29990614cc","Type":"ContainerStarted","Data":"afd4d1f0274da178f7e53343cc7f877b7a12009d1439d51e00da97365fbf93be"} Jan 23 06:57:19 crc kubenswrapper[4937]: I0123 06:57:19.952524 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e322b64-f409-4e9d-a3d8-0d29990614cc","Type":"ContainerStarted","Data":"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7"} Jan 23 06:57:19 crc kubenswrapper[4937]: I0123 06:57:19.953136 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e322b64-f409-4e9d-a3d8-0d29990614cc","Type":"ContainerStarted","Data":"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f"} Jan 23 06:57:19 crc kubenswrapper[4937]: I0123 06:57:19.988996 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.988967032 podStartE2EDuration="2.988967032s" podCreationTimestamp="2026-01-23 06:57:17 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:19.969803903 +0000 UTC m=+1439.773570596" watchObservedRunningTime="2026-01-23 06:57:19.988967032 +0000 UTC m=+1439.792733715" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.130649 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.130973 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.177238 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.177504 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.214566 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.611866 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.663409 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.698341 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f85c5bcf-rjmhn"] Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.698868 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="dnsmasq-dns" containerID="cri-o://68a21ee4de62546f3101e109eb048356066415d95d0fb14d80a41106977ecf7e" gracePeriod=10 Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 
06:57:21.885615 4937 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused" Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.982503 4937 generic.go:334] "Generic (PLEG): container finished" podID="a12faab4-277a-44f5-becc-bd8f273ae701" containerID="68a21ee4de62546f3101e109eb048356066415d95d0fb14d80a41106977ecf7e" exitCode=0 Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.982919 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" event={"ID":"a12faab4-277a-44f5-becc-bd8f273ae701","Type":"ContainerDied","Data":"68a21ee4de62546f3101e109eb048356066415d95d0fb14d80a41106977ecf7e"} Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.986919 4937 generic.go:334] "Generic (PLEG): container finished" podID="f2bf80e7-424f-41b4-8baf-393a440d8c1c" containerID="a752c7dc8111ec23ecdd255f9de261340376389ed6e07fdfc7b3481f0a1f4b50" exitCode=0 Jan 23 06:57:21 crc kubenswrapper[4937]: I0123 06:57:21.986990 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z6jhd" event={"ID":"f2bf80e7-424f-41b4-8baf-393a440d8c1c","Type":"ContainerDied","Data":"a752c7dc8111ec23ecdd255f9de261340376389ed6e07fdfc7b3481f0a1f4b50"} Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.021422 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.216260 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 
06:57:22.216267 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.219791 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.289691 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgbxn\" (UniqueName: \"kubernetes.io/projected/a12faab4-277a-44f5-becc-bd8f273ae701-kube-api-access-dgbxn\") pod \"a12faab4-277a-44f5-becc-bd8f273ae701\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.289739 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-swift-storage-0\") pod \"a12faab4-277a-44f5-becc-bd8f273ae701\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.289769 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-nb\") pod \"a12faab4-277a-44f5-becc-bd8f273ae701\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.289817 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-sb\") pod \"a12faab4-277a-44f5-becc-bd8f273ae701\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 
06:57:22.289930 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-config\") pod \"a12faab4-277a-44f5-becc-bd8f273ae701\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.290001 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-svc\") pod \"a12faab4-277a-44f5-becc-bd8f273ae701\" (UID: \"a12faab4-277a-44f5-becc-bd8f273ae701\") " Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.295793 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12faab4-277a-44f5-becc-bd8f273ae701-kube-api-access-dgbxn" (OuterVolumeSpecName: "kube-api-access-dgbxn") pod "a12faab4-277a-44f5-becc-bd8f273ae701" (UID: "a12faab4-277a-44f5-becc-bd8f273ae701"). InnerVolumeSpecName "kube-api-access-dgbxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.359754 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a12faab4-277a-44f5-becc-bd8f273ae701" (UID: "a12faab4-277a-44f5-becc-bd8f273ae701"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.392434 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgbxn\" (UniqueName: \"kubernetes.io/projected/a12faab4-277a-44f5-becc-bd8f273ae701-kube-api-access-dgbxn\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.392469 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.394302 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a12faab4-277a-44f5-becc-bd8f273ae701" (UID: "a12faab4-277a-44f5-becc-bd8f273ae701"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.397398 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a12faab4-277a-44f5-becc-bd8f273ae701" (UID: "a12faab4-277a-44f5-becc-bd8f273ae701"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.400084 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-config" (OuterVolumeSpecName: "config") pod "a12faab4-277a-44f5-becc-bd8f273ae701" (UID: "a12faab4-277a-44f5-becc-bd8f273ae701"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.408229 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a12faab4-277a-44f5-becc-bd8f273ae701" (UID: "a12faab4-277a-44f5-becc-bd8f273ae701"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.494677 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.494718 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.494728 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:22 crc kubenswrapper[4937]: I0123 06:57:22.494739 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12faab4-277a-44f5-becc-bd8f273ae701-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.000409 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.000580 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f85c5bcf-rjmhn" event={"ID":"a12faab4-277a-44f5-becc-bd8f273ae701","Type":"ContainerDied","Data":"30735a0c892151023eeb3d2a9e453376fff23246bbeec52ae6f79144391064bb"} Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.000742 4937 scope.go:117] "RemoveContainer" containerID="68a21ee4de62546f3101e109eb048356066415d95d0fb14d80a41106977ecf7e" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.030021 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f85c5bcf-rjmhn"] Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.038694 4937 scope.go:117] "RemoveContainer" containerID="90d98424e81038b18cee86ea7e18a34d0dcd560e7c31e54dc2fffa1fba399776" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.042130 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f85c5bcf-rjmhn"] Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.326773 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.326837 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.391688 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.516048 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-config-data\") pod \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.516275 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-scripts\") pod \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.516336 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnb9\" (UniqueName: \"kubernetes.io/projected/f2bf80e7-424f-41b4-8baf-393a440d8c1c-kube-api-access-2gnb9\") pod \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.516408 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-combined-ca-bundle\") pod \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\" (UID: \"f2bf80e7-424f-41b4-8baf-393a440d8c1c\") " Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.521802 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-scripts" (OuterVolumeSpecName: "scripts") pod "f2bf80e7-424f-41b4-8baf-393a440d8c1c" (UID: "f2bf80e7-424f-41b4-8baf-393a440d8c1c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.523463 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bf80e7-424f-41b4-8baf-393a440d8c1c-kube-api-access-2gnb9" (OuterVolumeSpecName: "kube-api-access-2gnb9") pod "f2bf80e7-424f-41b4-8baf-393a440d8c1c" (UID: "f2bf80e7-424f-41b4-8baf-393a440d8c1c"). InnerVolumeSpecName "kube-api-access-2gnb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.542934 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bf80e7-424f-41b4-8baf-393a440d8c1c" (UID: "f2bf80e7-424f-41b4-8baf-393a440d8c1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.546515 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-config-data" (OuterVolumeSpecName: "config-data") pod "f2bf80e7-424f-41b4-8baf-393a440d8c1c" (UID: "f2bf80e7-424f-41b4-8baf-393a440d8c1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.618732 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.618778 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnb9\" (UniqueName: \"kubernetes.io/projected/f2bf80e7-424f-41b4-8baf-393a440d8c1c-kube-api-access-2gnb9\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.618791 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:23 crc kubenswrapper[4937]: I0123 06:57:23.618800 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf80e7-424f-41b4-8baf-393a440d8c1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.018738 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-z6jhd" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.018756 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-z6jhd" event={"ID":"f2bf80e7-424f-41b4-8baf-393a440d8c1c","Type":"ContainerDied","Data":"2361bd8feb3b5a85fadbc25b23b40d9423b0c68c26ff7f9ff0e6b8f7ed511693"} Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.019877 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2361bd8feb3b5a85fadbc25b23b40d9423b0c68c26ff7f9ff0e6b8f7ed511693" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.203768 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.204100 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-log" containerID="cri-o://8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041" gracePeriod=30 Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.204567 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-api" containerID="cri-o://397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac" gracePeriod=30 Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.241572 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.254507 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.254824 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-log" 
containerID="cri-o://06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f" gracePeriod=30 Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.255013 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-metadata" containerID="cri-o://1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7" gracePeriod=30 Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.537071 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" path="/var/lib/kubelet/pods/a12faab4-277a-44f5-becc-bd8f273ae701/volumes" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.881850 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.940687 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-config-data\") pod \"7e322b64-f409-4e9d-a3d8-0d29990614cc\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.940848 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flr4r\" (UniqueName: \"kubernetes.io/projected/7e322b64-f409-4e9d-a3d8-0d29990614cc-kube-api-access-flr4r\") pod \"7e322b64-f409-4e9d-a3d8-0d29990614cc\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.940896 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e322b64-f409-4e9d-a3d8-0d29990614cc-logs\") pod \"7e322b64-f409-4e9d-a3d8-0d29990614cc\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.940964 
4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-nova-metadata-tls-certs\") pod \"7e322b64-f409-4e9d-a3d8-0d29990614cc\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.940989 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-combined-ca-bundle\") pod \"7e322b64-f409-4e9d-a3d8-0d29990614cc\" (UID: \"7e322b64-f409-4e9d-a3d8-0d29990614cc\") " Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.941238 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e322b64-f409-4e9d-a3d8-0d29990614cc-logs" (OuterVolumeSpecName: "logs") pod "7e322b64-f409-4e9d-a3d8-0d29990614cc" (UID: "7e322b64-f409-4e9d-a3d8-0d29990614cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.941991 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e322b64-f409-4e9d-a3d8-0d29990614cc-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.948294 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e322b64-f409-4e9d-a3d8-0d29990614cc-kube-api-access-flr4r" (OuterVolumeSpecName: "kube-api-access-flr4r") pod "7e322b64-f409-4e9d-a3d8-0d29990614cc" (UID: "7e322b64-f409-4e9d-a3d8-0d29990614cc"). InnerVolumeSpecName "kube-api-access-flr4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.975451 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e322b64-f409-4e9d-a3d8-0d29990614cc" (UID: "7e322b64-f409-4e9d-a3d8-0d29990614cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:24 crc kubenswrapper[4937]: I0123 06:57:24.975868 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-config-data" (OuterVolumeSpecName: "config-data") pod "7e322b64-f409-4e9d-a3d8-0d29990614cc" (UID: "7e322b64-f409-4e9d-a3d8-0d29990614cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.008821 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7e322b64-f409-4e9d-a3d8-0d29990614cc" (UID: "7e322b64-f409-4e9d-a3d8-0d29990614cc"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031100 4937 generic.go:334] "Generic (PLEG): container finished" podID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerID="1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7" exitCode=0 Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031134 4937 generic.go:334] "Generic (PLEG): container finished" podID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerID="06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f" exitCode=143 Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031158 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031210 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e322b64-f409-4e9d-a3d8-0d29990614cc","Type":"ContainerDied","Data":"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7"} Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031268 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e322b64-f409-4e9d-a3d8-0d29990614cc","Type":"ContainerDied","Data":"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f"} Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031282 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e322b64-f409-4e9d-a3d8-0d29990614cc","Type":"ContainerDied","Data":"afd4d1f0274da178f7e53343cc7f877b7a12009d1439d51e00da97365fbf93be"} Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.031301 4937 scope.go:117] "RemoveContainer" containerID="1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.043828 4937 generic.go:334] "Generic (PLEG): container finished" podID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" 
containerID="8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041" exitCode=143 Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.044030 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a4f06343-b1d2-4394-8271-febc81587ec7" containerName="nova-scheduler-scheduler" containerID="cri-o://babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322" gracePeriod=30 Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.044114 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5c0b390-f123-48d4-98bd-fdb7791ac9ef","Type":"ContainerDied","Data":"8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041"} Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.045344 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.045508 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flr4r\" (UniqueName: \"kubernetes.io/projected/7e322b64-f409-4e9d-a3d8-0d29990614cc-kube-api-access-flr4r\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.045641 4937 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.045751 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e322b64-f409-4e9d-a3d8-0d29990614cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.113001 4937 scope.go:117] "RemoveContainer" 
containerID="06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.120186 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.133578 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.147257 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:25 crc kubenswrapper[4937]: E0123 06:57:25.151073 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="dnsmasq-dns" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151132 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="dnsmasq-dns" Jan 23 06:57:25 crc kubenswrapper[4937]: E0123 06:57:25.151150 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-log" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151157 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-log" Jan 23 06:57:25 crc kubenswrapper[4937]: E0123 06:57:25.151186 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf80e7-424f-41b4-8baf-393a440d8c1c" containerName="nova-manage" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151194 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bf80e7-424f-41b4-8baf-393a440d8c1c" containerName="nova-manage" Jan 23 06:57:25 crc kubenswrapper[4937]: E0123 06:57:25.151206 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-metadata" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151212 4937 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-metadata" Jan 23 06:57:25 crc kubenswrapper[4937]: E0123 06:57:25.151222 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="init" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151229 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="init" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151503 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bf80e7-424f-41b4-8baf-393a440d8c1c" containerName="nova-manage" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151515 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-log" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151523 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" containerName="nova-metadata-metadata" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.151535 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12faab4-277a-44f5-becc-bd8f273ae701" containerName="dnsmasq-dns" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.158115 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.160483 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.162429 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.176403 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.186288 4937 scope.go:117] "RemoveContainer" containerID="1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7" Jan 23 06:57:25 crc kubenswrapper[4937]: E0123 06:57:25.188751 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7\": container with ID starting with 1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7 not found: ID does not exist" containerID="1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.188794 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7"} err="failed to get container status \"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7\": rpc error: code = NotFound desc = could not find container \"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7\": container with ID starting with 1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7 not found: ID does not exist" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.188816 4937 scope.go:117] "RemoveContainer" containerID="06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f" Jan 23 06:57:25 crc 
kubenswrapper[4937]: E0123 06:57:25.190455 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f\": container with ID starting with 06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f not found: ID does not exist" containerID="06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.193663 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f"} err="failed to get container status \"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f\": rpc error: code = NotFound desc = could not find container \"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f\": container with ID starting with 06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f not found: ID does not exist" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.193717 4937 scope.go:117] "RemoveContainer" containerID="1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.195929 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7"} err="failed to get container status \"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7\": rpc error: code = NotFound desc = could not find container \"1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7\": container with ID starting with 1f315122d6c64ddaca25d05b64128c177dd3edd4bbda38e39895e9593bf2cfa7 not found: ID does not exist" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.195956 4937 scope.go:117] "RemoveContainer" containerID="06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f" Jan 23 
06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.196671 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f"} err="failed to get container status \"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f\": rpc error: code = NotFound desc = could not find container \"06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f\": container with ID starting with 06048d7dc4da1963fd8edd3c249f75af6a8f8adb13658a0b9c6502650077ed7f not found: ID does not exist" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.250286 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99a5f22-8b8f-4e04-ac09-199c03bb1630-logs\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.250339 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.250382 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-config-data\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.250445 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv7ct\" (UniqueName: \"kubernetes.io/projected/c99a5f22-8b8f-4e04-ac09-199c03bb1630-kube-api-access-xv7ct\") 
pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.250478 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.352535 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv7ct\" (UniqueName: \"kubernetes.io/projected/c99a5f22-8b8f-4e04-ac09-199c03bb1630-kube-api-access-xv7ct\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.352639 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.352799 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99a5f22-8b8f-4e04-ac09-199c03bb1630-logs\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.352842 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc 
kubenswrapper[4937]: I0123 06:57:25.352900 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-config-data\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.353275 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99a5f22-8b8f-4e04-ac09-199c03bb1630-logs\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.355848 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.356518 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-config-data\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.366859 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.368495 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv7ct\" (UniqueName: 
\"kubernetes.io/projected/c99a5f22-8b8f-4e04-ac09-199c03bb1630-kube-api-access-xv7ct\") pod \"nova-metadata-0\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " pod="openstack/nova-metadata-0" Jan 23 06:57:25 crc kubenswrapper[4937]: I0123 06:57:25.486771 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.732797 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.861792 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-combined-ca-bundle\") pod \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.862151 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-config-data\") pod \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.862294 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-logs\") pod \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.862352 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmhzg\" (UniqueName: \"kubernetes.io/projected/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-kube-api-access-fmhzg\") pod \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\" (UID: \"e5c0b390-f123-48d4-98bd-fdb7791ac9ef\") " Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 
06:57:25.863236 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-logs" (OuterVolumeSpecName: "logs") pod "e5c0b390-f123-48d4-98bd-fdb7791ac9ef" (UID: "e5c0b390-f123-48d4-98bd-fdb7791ac9ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.866684 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-kube-api-access-fmhzg" (OuterVolumeSpecName: "kube-api-access-fmhzg") pod "e5c0b390-f123-48d4-98bd-fdb7791ac9ef" (UID: "e5c0b390-f123-48d4-98bd-fdb7791ac9ef"). InnerVolumeSpecName "kube-api-access-fmhzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.897674 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-config-data" (OuterVolumeSpecName: "config-data") pod "e5c0b390-f123-48d4-98bd-fdb7791ac9ef" (UID: "e5c0b390-f123-48d4-98bd-fdb7791ac9ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.902927 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5c0b390-f123-48d4-98bd-fdb7791ac9ef" (UID: "e5c0b390-f123-48d4-98bd-fdb7791ac9ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.965721 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.965745 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmhzg\" (UniqueName: \"kubernetes.io/projected/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-kube-api-access-fmhzg\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.965759 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.965770 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c0b390-f123-48d4-98bd-fdb7791ac9ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:25.973754 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.054437 4937 generic.go:334] "Generic (PLEG): container finished" podID="b7ea5f4b-dcae-4f04-b580-cc1e378d4073" containerID="b07b50b15265827e42ca199fe4f384c4b67dd85bf80e0d1c609946c1d0b8d5f3" exitCode=0 Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.054525 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" event={"ID":"b7ea5f4b-dcae-4f04-b580-cc1e378d4073","Type":"ContainerDied","Data":"b07b50b15265827e42ca199fe4f384c4b67dd85bf80e0d1c609946c1d0b8d5f3"} Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.056675 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerID="397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac" exitCode=0 Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.056729 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.056814 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5c0b390-f123-48d4-98bd-fdb7791ac9ef","Type":"ContainerDied","Data":"397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac"} Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.056858 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5c0b390-f123-48d4-98bd-fdb7791ac9ef","Type":"ContainerDied","Data":"ea507686509e2437ef2622d69f6beab0cd78800c21b2ecd72987d45dbe78b16a"} Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.056877 4937 scope.go:117] "RemoveContainer" containerID="397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.058478 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c99a5f22-8b8f-4e04-ac09-199c03bb1630","Type":"ContainerStarted","Data":"c2fa04332e86b32bd5889b029ad4b008e031c07e68994690529919bf6b0392dd"} Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.096096 4937 scope.go:117] "RemoveContainer" containerID="8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.138037 4937 scope.go:117] "RemoveContainer" containerID="397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.139229 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.140817 4937 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac\": container with ID starting with 397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac not found: ID does not exist" containerID="397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.140843 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac"} err="failed to get container status \"397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac\": rpc error: code = NotFound desc = could not find container \"397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac\": container with ID starting with 397da000d4b84a8e12d246cd1b519695f77314a3777a8ac7986e9d5e809cf5ac not found: ID does not exist" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.140862 4937 scope.go:117] "RemoveContainer" containerID="8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041" Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.142818 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041\": container with ID starting with 8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041 not found: ID does not exist" containerID="8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.142848 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041"} err="failed to get container status \"8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041\": rpc error: code = NotFound desc = could not find container 
\"8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041\": container with ID starting with 8a833f47ff4843ab23647f8c5383ca2be8be1aef810f9e85e1ff0343ab207041 not found: ID does not exist" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.168020 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.181128 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.182576 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.183037 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-api" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.183054 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-api" Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.183079 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-log" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.183086 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-log" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.183323 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-log" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.183352 4937 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" containerName="nova-api-api" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.184587 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.187406 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.187725 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.189141 4937 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 06:57:26 crc kubenswrapper[4937]: E0123 06:57:26.189191 4937 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a4f06343-b1d2-4394-8271-febc81587ec7" containerName="nova-scheduler-scheduler" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.201223 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.271392 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hfqk\" (UniqueName: 
\"kubernetes.io/projected/d2911e95-df39-40f3-932c-e2dcbe84491a-kube-api-access-2hfqk\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.271491 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-config-data\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.271513 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2911e95-df39-40f3-932c-e2dcbe84491a-logs\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.271695 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.373001 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hfqk\" (UniqueName: \"kubernetes.io/projected/d2911e95-df39-40f3-932c-e2dcbe84491a-kube-api-access-2hfqk\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.373296 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2911e95-df39-40f3-932c-e2dcbe84491a-logs\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc 
kubenswrapper[4937]: I0123 06:57:26.373317 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-config-data\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.373416 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.373887 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2911e95-df39-40f3-932c-e2dcbe84491a-logs\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.377483 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.377796 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-config-data\") pod \"nova-api-0\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.392410 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hfqk\" (UniqueName: \"kubernetes.io/projected/d2911e95-df39-40f3-932c-e2dcbe84491a-kube-api-access-2hfqk\") pod \"nova-api-0\" (UID: 
\"d2911e95-df39-40f3-932c-e2dcbe84491a\") " pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.502125 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.542396 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e322b64-f409-4e9d-a3d8-0d29990614cc" path="/var/lib/kubelet/pods/7e322b64-f409-4e9d-a3d8-0d29990614cc/volumes" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.543718 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c0b390-f123-48d4-98bd-fdb7791ac9ef" path="/var/lib/kubelet/pods/e5c0b390-f123-48d4-98bd-fdb7791ac9ef/volumes" Jan 23 06:57:26 crc kubenswrapper[4937]: I0123 06:57:26.939837 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.070254 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c99a5f22-8b8f-4e04-ac09-199c03bb1630","Type":"ContainerStarted","Data":"c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c"} Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.070294 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c99a5f22-8b8f-4e04-ac09-199c03bb1630","Type":"ContainerStarted","Data":"c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019"} Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.071797 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2911e95-df39-40f3-932c-e2dcbe84491a","Type":"ContainerStarted","Data":"e31ca2a87b6843c2a20fe3d6af08be10afaf8bb64c90439fd6400fb6053cff66"} Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.094064 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.094043126 
podStartE2EDuration="2.094043126s" podCreationTimestamp="2026-01-23 06:57:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:27.087951121 +0000 UTC m=+1446.891717774" watchObservedRunningTime="2026-01-23 06:57:27.094043126 +0000 UTC m=+1446.897809779" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.383550 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.501127 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrnx5\" (UniqueName: \"kubernetes.io/projected/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-kube-api-access-hrnx5\") pod \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.501201 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-combined-ca-bundle\") pod \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.501236 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-scripts\") pod \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.501402 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-config-data\") pod \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\" (UID: \"b7ea5f4b-dcae-4f04-b580-cc1e378d4073\") " Jan 23 06:57:27 crc 
kubenswrapper[4937]: I0123 06:57:27.506999 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-scripts" (OuterVolumeSpecName: "scripts") pod "b7ea5f4b-dcae-4f04-b580-cc1e378d4073" (UID: "b7ea5f4b-dcae-4f04-b580-cc1e378d4073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.509858 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-kube-api-access-hrnx5" (OuterVolumeSpecName: "kube-api-access-hrnx5") pod "b7ea5f4b-dcae-4f04-b580-cc1e378d4073" (UID: "b7ea5f4b-dcae-4f04-b580-cc1e378d4073"). InnerVolumeSpecName "kube-api-access-hrnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.533507 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-config-data" (OuterVolumeSpecName: "config-data") pod "b7ea5f4b-dcae-4f04-b580-cc1e378d4073" (UID: "b7ea5f4b-dcae-4f04-b580-cc1e378d4073"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.541088 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7ea5f4b-dcae-4f04-b580-cc1e378d4073" (UID: "b7ea5f4b-dcae-4f04-b580-cc1e378d4073"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.603871 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrnx5\" (UniqueName: \"kubernetes.io/projected/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-kube-api-access-hrnx5\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.604246 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.604312 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:27 crc kubenswrapper[4937]: I0123 06:57:27.604337 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ea5f4b-dcae-4f04-b580-cc1e378d4073-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.090878 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2911e95-df39-40f3-932c-e2dcbe84491a","Type":"ContainerStarted","Data":"3eb1826dcf9588d3425ffdfdb7c83514bc17a6870ac6a4c206fb5768560409a2"} Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.090918 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2911e95-df39-40f3-932c-e2dcbe84491a","Type":"ContainerStarted","Data":"1b0b486f3c41da6be83c8fdcbb62f9085fa7d1129a384ca4169e2da214c2a2df"} Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.094745 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" 
event={"ID":"b7ea5f4b-dcae-4f04-b580-cc1e378d4073","Type":"ContainerDied","Data":"32421920a89102b74239165d525fe669ce5d4dd8c1ba1c3f19cadef49c6eb9cc"} Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.094791 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32421920a89102b74239165d525fe669ce5d4dd8c1ba1c3f19cadef49c6eb9cc" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.094753 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q4wrm" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.126814 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.126796642 podStartE2EDuration="2.126796642s" podCreationTimestamp="2026-01-23 06:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:28.12155683 +0000 UTC m=+1447.925323493" watchObservedRunningTime="2026-01-23 06:57:28.126796642 +0000 UTC m=+1447.930563295" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.154027 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 06:57:28 crc kubenswrapper[4937]: E0123 06:57:28.154560 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ea5f4b-dcae-4f04-b580-cc1e378d4073" containerName="nova-cell1-conductor-db-sync" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.154581 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ea5f4b-dcae-4f04-b580-cc1e378d4073" containerName="nova-cell1-conductor-db-sync" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.154869 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ea5f4b-dcae-4f04-b580-cc1e378d4073" containerName="nova-cell1-conductor-db-sync" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.155658 4937 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.157418 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.165316 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.215496 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668f3ab2-5d61-4919-afa3-356a1a061499-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.216078 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfsmf\" (UniqueName: \"kubernetes.io/projected/668f3ab2-5d61-4919-afa3-356a1a061499-kube-api-access-xfsmf\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.216194 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668f3ab2-5d61-4919-afa3-356a1a061499-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.317893 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfsmf\" (UniqueName: \"kubernetes.io/projected/668f3ab2-5d61-4919-afa3-356a1a061499-kube-api-access-xfsmf\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " 
pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.317955 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668f3ab2-5d61-4919-afa3-356a1a061499-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.318647 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668f3ab2-5d61-4919-afa3-356a1a061499-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.322080 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/668f3ab2-5d61-4919-afa3-356a1a061499-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.322090 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/668f3ab2-5d61-4919-afa3-356a1a061499-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.333801 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfsmf\" (UniqueName: \"kubernetes.io/projected/668f3ab2-5d61-4919-afa3-356a1a061499-kube-api-access-xfsmf\") pod \"nova-cell1-conductor-0\" (UID: \"668f3ab2-5d61-4919-afa3-356a1a061499\") " pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.490316 4937 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:28 crc kubenswrapper[4937]: I0123 06:57:28.973329 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.107430 4937 generic.go:334] "Generic (PLEG): container finished" podID="a4f06343-b1d2-4394-8271-febc81587ec7" containerID="babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322" exitCode=0 Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.107577 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4f06343-b1d2-4394-8271-febc81587ec7","Type":"ContainerDied","Data":"babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322"} Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.107647 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a4f06343-b1d2-4394-8271-febc81587ec7","Type":"ContainerDied","Data":"e3f2dc8bd8a1bbb43f57093fffaa096e23a697ed488949c1e5ac6f841c840b3f"} Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.107662 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f2dc8bd8a1bbb43f57093fffaa096e23a697ed488949c1e5ac6f841c840b3f" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.114793 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"668f3ab2-5d61-4919-afa3-356a1a061499","Type":"ContainerStarted","Data":"a9b77aefadcbc3531eda5fc8ee869455f1d626a4cccd3d8b6cbcbfd84955a55c"} Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.153578 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.239646 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-combined-ca-bundle\") pod \"a4f06343-b1d2-4394-8271-febc81587ec7\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.239961 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb72r\" (UniqueName: \"kubernetes.io/projected/a4f06343-b1d2-4394-8271-febc81587ec7-kube-api-access-cb72r\") pod \"a4f06343-b1d2-4394-8271-febc81587ec7\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.240313 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-config-data\") pod \"a4f06343-b1d2-4394-8271-febc81587ec7\" (UID: \"a4f06343-b1d2-4394-8271-febc81587ec7\") " Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.248068 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f06343-b1d2-4394-8271-febc81587ec7-kube-api-access-cb72r" (OuterVolumeSpecName: "kube-api-access-cb72r") pod "a4f06343-b1d2-4394-8271-febc81587ec7" (UID: "a4f06343-b1d2-4394-8271-febc81587ec7"). InnerVolumeSpecName "kube-api-access-cb72r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.276309 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4f06343-b1d2-4394-8271-febc81587ec7" (UID: "a4f06343-b1d2-4394-8271-febc81587ec7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.281533 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-config-data" (OuterVolumeSpecName: "config-data") pod "a4f06343-b1d2-4394-8271-febc81587ec7" (UID: "a4f06343-b1d2-4394-8271-febc81587ec7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.342859 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.342967 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4f06343-b1d2-4394-8271-febc81587ec7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:29 crc kubenswrapper[4937]: I0123 06:57:29.343020 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb72r\" (UniqueName: \"kubernetes.io/projected/a4f06343-b1d2-4394-8271-febc81587ec7-kube-api-access-cb72r\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.133859 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.136799 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"668f3ab2-5d61-4919-afa3-356a1a061499","Type":"ContainerStarted","Data":"cbd80fc7042b0f949260ae60347be7a1b0202d657591b7c977c6c39927eb9274"} Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.137823 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.170944 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.170924474 podStartE2EDuration="2.170924474s" podCreationTimestamp="2026-01-23 06:57:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:30.167273536 +0000 UTC m=+1449.971040209" watchObservedRunningTime="2026-01-23 06:57:30.170924474 +0000 UTC m=+1449.974691137" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.209944 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.239129 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.254838 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:30 crc kubenswrapper[4937]: E0123 06:57:30.255267 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f06343-b1d2-4394-8271-febc81587ec7" containerName="nova-scheduler-scheduler" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.255284 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f06343-b1d2-4394-8271-febc81587ec7" containerName="nova-scheduler-scheduler" Jan 23 06:57:30 crc 
kubenswrapper[4937]: I0123 06:57:30.255512 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f06343-b1d2-4394-8271-febc81587ec7" containerName="nova-scheduler-scheduler" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.256186 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.258627 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.271522 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.364903 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-config-data\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.365022 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.365096 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/fdefd02d-de3a-421a-8f1b-d5a121658469-kube-api-access-h8l66\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.466532 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/fdefd02d-de3a-421a-8f1b-d5a121658469-kube-api-access-h8l66\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.466693 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-config-data\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.466864 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.472308 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-config-data\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.487030 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.487202 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.488885 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" 
Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.489441 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/fdefd02d-de3a-421a-8f1b-d5a121658469-kube-api-access-h8l66\") pod \"nova-scheduler-0\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " pod="openstack/nova-scheduler-0" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.541690 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f06343-b1d2-4394-8271-febc81587ec7" path="/var/lib/kubelet/pods/a4f06343-b1d2-4394-8271-febc81587ec7/volumes" Jan 23 06:57:30 crc kubenswrapper[4937]: I0123 06:57:30.580088 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:57:31 crc kubenswrapper[4937]: I0123 06:57:31.088527 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:57:31 crc kubenswrapper[4937]: I0123 06:57:31.147919 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdefd02d-de3a-421a-8f1b-d5a121658469","Type":"ContainerStarted","Data":"4dc17d8281124f377f8f3e64850c24f6926e73a14c0dadc6461fa0c0cd624596"} Jan 23 06:57:32 crc kubenswrapper[4937]: I0123 06:57:32.163718 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdefd02d-de3a-421a-8f1b-d5a121658469","Type":"ContainerStarted","Data":"6b4ab85b29add7dcdc2466c645a803fd92df5f3f174f69e5a7844c60535b67f8"} Jan 23 06:57:35 crc kubenswrapper[4937]: I0123 06:57:35.403879 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 06:57:35 crc kubenswrapper[4937]: I0123 06:57:35.435977 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.435953293 podStartE2EDuration="5.435953293s" podCreationTimestamp="2026-01-23 06:57:30 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:32.186943294 +0000 UTC m=+1451.990709987" watchObservedRunningTime="2026-01-23 06:57:35.435953293 +0000 UTC m=+1455.239719956" Jan 23 06:57:35 crc kubenswrapper[4937]: I0123 06:57:35.487703 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:57:35 crc kubenswrapper[4937]: I0123 06:57:35.488060 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:57:35 crc kubenswrapper[4937]: I0123 06:57:35.580481 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.501819 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.501866 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.503280 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.503321 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.873382 4937 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-vr4mv"] Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.876124 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:36 crc kubenswrapper[4937]: I0123 06:57:36.886823 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vr4mv"] Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.007242 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-utilities\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.007307 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25b55\" (UniqueName: \"kubernetes.io/projected/222dfe15-3dc2-4157-9648-9f64e79dcf1c-kube-api-access-25b55\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.007337 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-catalog-content\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.108904 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-utilities\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " 
pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.108987 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25b55\" (UniqueName: \"kubernetes.io/projected/222dfe15-3dc2-4157-9648-9f64e79dcf1c-kube-api-access-25b55\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.109036 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-catalog-content\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.109434 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-utilities\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.109765 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-catalog-content\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.132839 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25b55\" (UniqueName: \"kubernetes.io/projected/222dfe15-3dc2-4157-9648-9f64e79dcf1c-kube-api-access-25b55\") pod \"redhat-operators-vr4mv\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " pod="openshift-marketplace/redhat-operators-vr4mv" Jan 
23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.206503 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.587743 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.587866 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.726178 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.726243 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:57:37 crc kubenswrapper[4937]: I0123 06:57:37.738857 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vr4mv"] Jan 23 06:57:38 crc kubenswrapper[4937]: I0123 06:57:38.238682 4937 generic.go:334] "Generic (PLEG): container finished" podID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" 
containerID="87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b" exitCode=0 Jan 23 06:57:38 crc kubenswrapper[4937]: I0123 06:57:38.238731 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerDied","Data":"87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b"} Jan 23 06:57:38 crc kubenswrapper[4937]: I0123 06:57:38.238761 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerStarted","Data":"79593d35e5abba4e73be16edd98266c85cfe3c401dfb9d45b4edf4e914171c42"} Jan 23 06:57:38 crc kubenswrapper[4937]: I0123 06:57:38.522346 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 06:57:39 crc kubenswrapper[4937]: I0123 06:57:39.917380 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 06:57:39 crc kubenswrapper[4937]: I0123 06:57:39.918091 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" containerName="kube-state-metrics" containerID="cri-o://b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d" gracePeriod=30 Jan 23 06:57:40 crc kubenswrapper[4937]: I0123 06:57:40.275190 4937 generic.go:334] "Generic (PLEG): container finished" podID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" containerID="b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d" exitCode=2 Jan 23 06:57:40 crc kubenswrapper[4937]: I0123 06:57:40.275525 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75b79f91-7f35-4e37-9fd8-2ada0ad723df","Type":"ContainerDied","Data":"b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d"} Jan 23 06:57:40 crc 
kubenswrapper[4937]: I0123 06:57:40.277029 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerStarted","Data":"4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2"}
Jan 23 06:57:40 crc kubenswrapper[4937]: I0123 06:57:40.580669 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 23 06:57:40 crc kubenswrapper[4937]: I0123 06:57:40.606476 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 23 06:57:40 crc kubenswrapper[4937]: I0123 06:57:40.924553 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 06:57:40 crc kubenswrapper[4937]: I0123 06:57:40.991469 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g87q\" (UniqueName: \"kubernetes.io/projected/75b79f91-7f35-4e37-9fd8-2ada0ad723df-kube-api-access-6g87q\") pod \"75b79f91-7f35-4e37-9fd8-2ada0ad723df\" (UID: \"75b79f91-7f35-4e37-9fd8-2ada0ad723df\") "
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.017407 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b79f91-7f35-4e37-9fd8-2ada0ad723df-kube-api-access-6g87q" (OuterVolumeSpecName: "kube-api-access-6g87q") pod "75b79f91-7f35-4e37-9fd8-2ada0ad723df" (UID: "75b79f91-7f35-4e37-9fd8-2ada0ad723df"). InnerVolumeSpecName "kube-api-access-6g87q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.093640 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g87q\" (UniqueName: \"kubernetes.io/projected/75b79f91-7f35-4e37-9fd8-2ada0ad723df-kube-api-access-6g87q\") on node \"crc\" DevicePath \"\""
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.295303 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"75b79f91-7f35-4e37-9fd8-2ada0ad723df","Type":"ContainerDied","Data":"a830b1ba474c5f44c7b103ea82448fb0667d9c9dcbe6fa78629342bea96704f0"}
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.295391 4937 scope.go:117] "RemoveContainer" containerID="b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.296847 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.329360 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.350986 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.386454 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.416557 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 06:57:41 crc kubenswrapper[4937]: E0123 06:57:41.424807 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" containerName="kube-state-metrics"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.424850 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" containerName="kube-state-metrics"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.425387 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" containerName="kube-state-metrics"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.426205 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.435274 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.435296 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.454004 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.504432 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.504569 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9mh\" (UniqueName: \"kubernetes.io/projected/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-api-access-wj9mh\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.504587 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.504673 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.606393 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9mh\" (UniqueName: \"kubernetes.io/projected/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-api-access-wj9mh\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.606455 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.607202 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.607253 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.611151 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.611611 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.625733 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.631905 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9mh\" (UniqueName: \"kubernetes.io/projected/7d117739-79d1-4b7d-9b78-331dd0af4a9e-kube-api-access-wj9mh\") pod \"kube-state-metrics-0\" (UID: \"7d117739-79d1-4b7d-9b78-331dd0af4a9e\") " pod="openstack/kube-state-metrics-0"
Jan 23 06:57:41 crc kubenswrapper[4937]: I0123 06:57:41.759641 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.012753 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.013364 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-central-agent" containerID="cri-o://909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca" gracePeriod=30
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.013512 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="proxy-httpd" containerID="cri-o://4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126" gracePeriod=30
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.013560 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="sg-core" containerID="cri-o://30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f" gracePeriod=30
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.013626 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-notification-agent" containerID="cri-o://a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064" gracePeriod=30
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.270751 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.306498 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d117739-79d1-4b7d-9b78-331dd0af4a9e","Type":"ContainerStarted","Data":"2cc4b1742ed88c82051d5fd99497867ab628945d7ce7b3b5ea4eed26aceae95e"}
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.310342 4937 generic.go:334] "Generic (PLEG): container finished" podID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerID="4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2" exitCode=0
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.310408 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerDied","Data":"4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2"}
Jan 23 06:57:42 crc kubenswrapper[4937]: I0123 06:57:42.541989 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b79f91-7f35-4e37-9fd8-2ada0ad723df" path="/var/lib/kubelet/pods/75b79f91-7f35-4e37-9fd8-2ada0ad723df/volumes"
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.328329 4937 generic.go:334] "Generic (PLEG): container finished" podID="5ddce763-a91f-4529-b73a-262f9d020da1" containerID="30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f" exitCode=2
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.328727 4937 generic.go:334] "Generic (PLEG): container finished" podID="5ddce763-a91f-4529-b73a-262f9d020da1" containerID="909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca" exitCode=0
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.328402 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerDied","Data":"30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f"}
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.328827 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerDied","Data":"909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca"}
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.332708 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7d117739-79d1-4b7d-9b78-331dd0af4a9e","Type":"ContainerStarted","Data":"0a7404fc65dc6dae973253c1cd3f17534f617e65a0887507d71a64fe5a284d14"}
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.332928 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 23 06:57:43 crc kubenswrapper[4937]: I0123 06:57:43.353466 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.575874422 podStartE2EDuration="2.353448376s" podCreationTimestamp="2026-01-23 06:57:41 +0000 UTC" firstStartedPulling="2026-01-23 06:57:42.271494125 +0000 UTC m=+1462.075260778" lastFinishedPulling="2026-01-23 06:57:43.049068079 +0000 UTC m=+1462.852834732" observedRunningTime="2026-01-23 06:57:43.344910754 +0000 UTC m=+1463.148677397" watchObservedRunningTime="2026-01-23 06:57:43.353448376 +0000 UTC m=+1463.157215029"
Jan 23 06:57:44 crc kubenswrapper[4937]: I0123 06:57:44.343815 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerStarted","Data":"965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2"}
Jan 23 06:57:44 crc kubenswrapper[4937]: I0123 06:57:44.347364 4937 generic.go:334] "Generic (PLEG): container finished" podID="5ddce763-a91f-4529-b73a-262f9d020da1" containerID="4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126" exitCode=0
Jan 23 06:57:44 crc kubenswrapper[4937]: I0123 06:57:44.348095 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerDied","Data":"4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126"}
Jan 23 06:57:44 crc kubenswrapper[4937]: I0123 06:57:44.368844 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vr4mv" podStartSLOduration=3.450552411 podStartE2EDuration="8.368825581s" podCreationTimestamp="2026-01-23 06:57:36 +0000 UTC" firstStartedPulling="2026-01-23 06:57:38.24182185 +0000 UTC m=+1458.045588503" lastFinishedPulling="2026-01-23 06:57:43.16009502 +0000 UTC m=+1462.963861673" observedRunningTime="2026-01-23 06:57:44.362745706 +0000 UTC m=+1464.166512379" watchObservedRunningTime="2026-01-23 06:57:44.368825581 +0000 UTC m=+1464.172592244"
Jan 23 06:57:45 crc kubenswrapper[4937]: I0123 06:57:45.493552 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 23 06:57:45 crc kubenswrapper[4937]: I0123 06:57:45.494184 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 23 06:57:45 crc kubenswrapper[4937]: I0123 06:57:45.498375 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 06:57:46 crc kubenswrapper[4937]: I0123 06:57:46.373958 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 06:57:46 crc kubenswrapper[4937]: I0123 06:57:46.512629 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 06:57:46 crc kubenswrapper[4937]: I0123 06:57:46.513262 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 06:57:46 crc kubenswrapper[4937]: I0123 06:57:46.519477 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 06:57:46 crc kubenswrapper[4937]: I0123 06:57:46.524188 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.207259 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vr4mv"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.207583 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vr4mv"
Jan 23 06:57:47 crc kubenswrapper[4937]: E0123 06:57:47.258402 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b79f91_7f35_4e37_9fd8_2ada0ad723df.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b79f91_7f35_4e37_9fd8_2ada0ad723df.slice/crio-a830b1ba474c5f44c7b103ea82448fb0667d9c9dcbe6fa78629342bea96704f0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b79f91_7f35_4e37_9fd8_2ada0ad723df.slice/crio-conmon-b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-conmon-4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b79f91_7f35_4e37_9fd8_2ada0ad723df.slice/crio-b8034b3c2d6a67363e1bbec2a25bf6af3e7cbdf04f1e0837a07328435580992d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c1a51b_b5cd_454a_bbb0_c284f595772f.slice/crio-48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-conmon-30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c1a51b_b5cd_454a_bbb0_c284f595772f.slice/crio-conmon-48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddce763_a91f_4529_b73a_262f9d020da1.slice/crio-conmon-909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.303498 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.381113 4937 generic.go:334] "Generic (PLEG): container finished" podID="a1c1a51b-b5cd-454a-bbb0-c284f595772f" containerID="48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146" exitCode=137
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.382590 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.383172 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a1c1a51b-b5cd-454a-bbb0-c284f595772f","Type":"ContainerDied","Data":"48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146"}
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.383216 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.383232 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a1c1a51b-b5cd-454a-bbb0-c284f595772f","Type":"ContainerDied","Data":"033adb405ac045210dd1bdfecf7e5bef33abb09d374c4a66c0f900111c581b44"}
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.383252 4937 scope.go:117] "RemoveContainer" containerID="48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.392272 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.423367 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-config-data\") pod \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") "
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.423515 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n64rl\" (UniqueName: \"kubernetes.io/projected/a1c1a51b-b5cd-454a-bbb0-c284f595772f-kube-api-access-n64rl\") pod \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") "
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.423576 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-combined-ca-bundle\") pod \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\" (UID: \"a1c1a51b-b5cd-454a-bbb0-c284f595772f\") "
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.446459 4937 scope.go:117] "RemoveContainer" containerID="48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146"
Jan 23 06:57:47 crc kubenswrapper[4937]: E0123 06:57:47.459768 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146\": container with ID starting with 48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146 not found: ID does not exist" containerID="48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.459830 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146"} err="failed to get container status \"48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146\": rpc error: code = NotFound desc = could not find container \"48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146\": container with ID starting with 48ebff1084c02f97a9ad01a19cf4684027ae41c9a7f995ad91aca9ee98bf1146 not found: ID does not exist"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.500828 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c1a51b-b5cd-454a-bbb0-c284f595772f-kube-api-access-n64rl" (OuterVolumeSpecName: "kube-api-access-n64rl") pod "a1c1a51b-b5cd-454a-bbb0-c284f595772f" (UID: "a1c1a51b-b5cd-454a-bbb0-c284f595772f"). InnerVolumeSpecName "kube-api-access-n64rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.528936 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n64rl\" (UniqueName: \"kubernetes.io/projected/a1c1a51b-b5cd-454a-bbb0-c284f595772f-kube-api-access-n64rl\") on node \"crc\" DevicePath \"\""
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.532070 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-config-data" (OuterVolumeSpecName: "config-data") pod "a1c1a51b-b5cd-454a-bbb0-c284f595772f" (UID: "a1c1a51b-b5cd-454a-bbb0-c284f595772f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.543349 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1c1a51b-b5cd-454a-bbb0-c284f595772f" (UID: "a1c1a51b-b5cd-454a-bbb0-c284f595772f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.596724 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68b68f87bf-ggngd"]
Jan 23 06:57:47 crc kubenswrapper[4937]: E0123 06:57:47.597179 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c1a51b-b5cd-454a-bbb0-c284f595772f" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.597192 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c1a51b-b5cd-454a-bbb0-c284f595772f" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.597375 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c1a51b-b5cd-454a-bbb0-c284f595772f" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.601237 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.615767 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b68f87bf-ggngd"]
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.632105 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.632142 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c1a51b-b5cd-454a-bbb0-c284f595772f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.718129 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.729723 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.734270 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-sb\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.734377 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jcr\" (UniqueName: \"kubernetes.io/projected/242fc90c-f61c-4f85-935a-17e7f81e2026-kube-api-access-k4jcr\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.734419 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-config\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.734472 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-nb\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.734535 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-swift-storage-0\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.734585 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-svc\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.741528 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.743229 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.747709 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.747745 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.748112 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.767096 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837097 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhtm\" (UniqueName: \"kubernetes.io/projected/0e59322d-6b8f-4c12-923e-b008c85e99ef-kube-api-access-jmhtm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837169 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4jcr\" (UniqueName: \"kubernetes.io/projected/242fc90c-f61c-4f85-935a-17e7f81e2026-kube-api-access-k4jcr\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837214 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-config\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837261 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837295 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-nb\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837458 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-swift-storage-0\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.837490 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838208 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-config\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838431 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-nb\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838440 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-swift-storage-0\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838546 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838585 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-svc\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838673 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.838741 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-sb\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.839528 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-sb\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.840217 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-svc\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.866437 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4jcr\" (UniqueName: \"kubernetes.io/projected/242fc90c-f61c-4f85-935a-17e7f81e2026-kube-api-access-k4jcr\") pod \"dnsmasq-dns-68b68f87bf-ggngd\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.938091 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.940408 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.940455 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.940497 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.940561 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmhtm\" (UniqueName: \"kubernetes.io/projected/0e59322d-6b8f-4c12-923e-b008c85e99ef-kube-api-access-jmhtm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 06:57:47 crc 
kubenswrapper[4937]: I0123 06:57:47.940627 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.944186 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.944839 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.945215 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.946859 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e59322d-6b8f-4c12-923e-b008c85e99ef-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:47 crc kubenswrapper[4937]: I0123 06:57:47.964132 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jmhtm\" (UniqueName: \"kubernetes.io/projected/0e59322d-6b8f-4c12-923e-b008c85e99ef-kube-api-access-jmhtm\") pod \"nova-cell1-novncproxy-0\" (UID: \"0e59322d-6b8f-4c12-923e-b008c85e99ef\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.072780 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.272804 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vr4mv" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="registry-server" probeResult="failure" output=< Jan 23 06:57:48 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 06:57:48 crc kubenswrapper[4937]: > Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.474037 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68b68f87bf-ggngd"] Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.551058 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c1a51b-b5cd-454a-bbb0-c284f595772f" path="/var/lib/kubelet/pods/a1c1a51b-b5cd-454a-bbb0-c284f595772f/volumes" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.623973 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.862998 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.958485 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-log-httpd\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.958564 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-config-data\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.958721 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-sg-core-conf-yaml\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.959325 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-combined-ca-bundle\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.959366 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-scripts\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.959431 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-run-httpd\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.959494 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnv49\" (UniqueName: \"kubernetes.io/projected/5ddce763-a91f-4529-b73a-262f9d020da1-kube-api-access-xnv49\") pod \"5ddce763-a91f-4529-b73a-262f9d020da1\" (UID: \"5ddce763-a91f-4529-b73a-262f9d020da1\") " Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.959672 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.959846 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.960866 4937 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.960893 4937 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ddce763-a91f-4529-b73a-262f9d020da1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.969983 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ddce763-a91f-4529-b73a-262f9d020da1-kube-api-access-xnv49" (OuterVolumeSpecName: "kube-api-access-xnv49") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "kube-api-access-xnv49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:48 crc kubenswrapper[4937]: I0123 06:57:48.985737 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-scripts" (OuterVolumeSpecName: "scripts") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.002051 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.063313 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.063614 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnv49\" (UniqueName: \"kubernetes.io/projected/5ddce763-a91f-4529-b73a-262f9d020da1-kube-api-access-xnv49\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.063631 4937 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.097458 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-config-data" (OuterVolumeSpecName: "config-data") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.113138 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ddce763-a91f-4529-b73a-262f9d020da1" (UID: "5ddce763-a91f-4529-b73a-262f9d020da1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.165587 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.165833 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ddce763-a91f-4529-b73a-262f9d020da1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.408761 4937 generic.go:334] "Generic (PLEG): container finished" podID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerID="978f10f3e93a87622b35537b7c10b347089019dd359654e799b836edb69645cc" exitCode=0 Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.408865 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" event={"ID":"242fc90c-f61c-4f85-935a-17e7f81e2026","Type":"ContainerDied","Data":"978f10f3e93a87622b35537b7c10b347089019dd359654e799b836edb69645cc"} Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.408929 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" event={"ID":"242fc90c-f61c-4f85-935a-17e7f81e2026","Type":"ContainerStarted","Data":"c836640af9f91ff6225f95367c8a5733a4a4b72dfbbbee4d58232af6f46c1b91"} Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.413080 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0e59322d-6b8f-4c12-923e-b008c85e99ef","Type":"ContainerStarted","Data":"0e558693d605416a091f444d9592b97a99cc078d63fd03058361dd8a3a241751"} Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.413132 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"0e59322d-6b8f-4c12-923e-b008c85e99ef","Type":"ContainerStarted","Data":"f45a90d88a0cba1536588eef9638b0eb30361c9d736495ac1939dafd492fc465"} Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.418073 4937 generic.go:334] "Generic (PLEG): container finished" podID="5ddce763-a91f-4529-b73a-262f9d020da1" containerID="a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064" exitCode=0 Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.418969 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.422747 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerDied","Data":"a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064"} Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.422912 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ddce763-a91f-4529-b73a-262f9d020da1","Type":"ContainerDied","Data":"cfafb4456254c74edfc50e26a6440d3168fceeac6f9a369dcc02137d3b37ad85"} Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.423011 4937 scope.go:117] "RemoveContainer" containerID="4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.493021 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.493005907 podStartE2EDuration="2.493005907s" podCreationTimestamp="2026-01-23 06:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:49.462410728 +0000 UTC m=+1469.266177381" watchObservedRunningTime="2026-01-23 06:57:49.493005907 +0000 UTC m=+1469.296772550" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.502345 4937 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.521494 4937 scope.go:117] "RemoveContainer" containerID="30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.531891 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543077 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.543522 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="sg-core" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543542 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="sg-core" Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.543563 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-notification-agent" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543570 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-notification-agent" Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.543579 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-central-agent" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543632 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-central-agent" Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.543651 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="proxy-httpd" Jan 23 06:57:49 crc 
kubenswrapper[4937]: I0123 06:57:49.543659 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="proxy-httpd" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543886 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-central-agent" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543913 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="sg-core" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543926 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="ceilometer-notification-agent" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.543940 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" containerName="proxy-httpd" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.545586 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.551229 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.551242 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.552704 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.564510 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.648149 4937 scope.go:117] "RemoveContainer" containerID="a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675351 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-config-data\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675400 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675476 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4p5q\" (UniqueName: \"kubernetes.io/projected/16893128-887d-469d-bc0d-63ece496c27d-kube-api-access-s4p5q\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " 
pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675541 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-scripts\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675558 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-run-httpd\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675609 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-log-httpd\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675678 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.675705 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.676258 4937 scope.go:117] "RemoveContainer" 
containerID="909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.710557 4937 scope.go:117] "RemoveContainer" containerID="4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126" Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.713473 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126\": container with ID starting with 4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126 not found: ID does not exist" containerID="4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.713534 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126"} err="failed to get container status \"4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126\": rpc error: code = NotFound desc = could not find container \"4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126\": container with ID starting with 4bbbc3f2700785d4b7ed7046358ecb04efebd765ee73f90e906aeb3a05e67126 not found: ID does not exist" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.713567 4937 scope.go:117] "RemoveContainer" containerID="30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f" Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.713876 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f\": container with ID starting with 30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f not found: ID does not exist" containerID="30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f" Jan 23 06:57:49 crc 
kubenswrapper[4937]: I0123 06:57:49.713916 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f"} err="failed to get container status \"30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f\": rpc error: code = NotFound desc = could not find container \"30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f\": container with ID starting with 30db52f103a940e12c4f31d4a915913a1aa105539d3c21da162885ca46e4618f not found: ID does not exist" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.713940 4937 scope.go:117] "RemoveContainer" containerID="a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064" Jan 23 06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.714259 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064\": container with ID starting with a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064 not found: ID does not exist" containerID="a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.714297 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064"} err="failed to get container status \"a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064\": rpc error: code = NotFound desc = could not find container \"a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064\": container with ID starting with a7e9306a302e9bf5ffb0691f5458acaefe94468bb3d34107c55c6a5602bc9064 not found: ID does not exist" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.714314 4937 scope.go:117] "RemoveContainer" containerID="909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca" Jan 23 
06:57:49 crc kubenswrapper[4937]: E0123 06:57:49.715169 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca\": container with ID starting with 909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca not found: ID does not exist" containerID="909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.715221 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca"} err="failed to get container status \"909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca\": rpc error: code = NotFound desc = could not find container \"909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca\": container with ID starting with 909dc20239e5d83f0a83f6d5c33ed13b3a02b383cb323e048bdac35639476fca not found: ID does not exist" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.777923 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778008 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778064 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-config-data\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778087 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778169 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4p5q\" (UniqueName: \"kubernetes.io/projected/16893128-887d-469d-bc0d-63ece496c27d-kube-api-access-s4p5q\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778239 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-scripts\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778260 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-run-httpd\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.778307 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-log-httpd\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 
06:57:49.779228 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-log-httpd\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.780230 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-run-httpd\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.781857 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.783033 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.784401 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-scripts\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.784765 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " 
pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.788548 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-config-data\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.794634 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4p5q\" (UniqueName: \"kubernetes.io/projected/16893128-887d-469d-bc0d-63ece496c27d-kube-api-access-s4p5q\") pod \"ceilometer-0\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " pod="openstack/ceilometer-0" Jan 23 06:57:49 crc kubenswrapper[4937]: I0123 06:57:49.886817 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.190456 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.340642 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.381499 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.427933 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerStarted","Data":"c6ea9a457471807502d179804956cca701ad24544d8813ad1f54ccf52b6de5ef"} Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.429744 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" event={"ID":"242fc90c-f61c-4f85-935a-17e7f81e2026","Type":"ContainerStarted","Data":"b3a70a92824af4310b467ee519fdf58a87504d9400ca7494962bd9e3541b3d84"} Jan 23 06:57:50 crc kubenswrapper[4937]: 
I0123 06:57:50.429825 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.431467 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-log" containerID="cri-o://1b0b486f3c41da6be83c8fdcbb62f9085fa7d1129a384ca4169e2da214c2a2df" gracePeriod=30 Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.431611 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-api" containerID="cri-o://3eb1826dcf9588d3425ffdfdb7c83514bc17a6870ac6a4c206fb5768560409a2" gracePeriod=30 Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.464045 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" podStartSLOduration=3.464024469 podStartE2EDuration="3.464024469s" podCreationTimestamp="2026-01-23 06:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:57:50.451992963 +0000 UTC m=+1470.255759616" watchObservedRunningTime="2026-01-23 06:57:50.464024469 +0000 UTC m=+1470.267791122" Jan 23 06:57:50 crc kubenswrapper[4937]: I0123 06:57:50.547659 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddce763-a91f-4529-b73a-262f9d020da1" path="/var/lib/kubelet/pods/5ddce763-a91f-4529-b73a-262f9d020da1/volumes" Jan 23 06:57:51 crc kubenswrapper[4937]: I0123 06:57:51.441914 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerStarted","Data":"d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53"} Jan 23 06:57:51 crc kubenswrapper[4937]: I0123 
06:57:51.442234 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerStarted","Data":"79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f"} Jan 23 06:57:51 crc kubenswrapper[4937]: I0123 06:57:51.444154 4937 generic.go:334] "Generic (PLEG): container finished" podID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerID="1b0b486f3c41da6be83c8fdcbb62f9085fa7d1129a384ca4169e2da214c2a2df" exitCode=143 Jan 23 06:57:51 crc kubenswrapper[4937]: I0123 06:57:51.444214 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2911e95-df39-40f3-932c-e2dcbe84491a","Type":"ContainerDied","Data":"1b0b486f3c41da6be83c8fdcbb62f9085fa7d1129a384ca4169e2da214c2a2df"} Jan 23 06:57:51 crc kubenswrapper[4937]: I0123 06:57:51.774573 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.452855 4937 generic.go:334] "Generic (PLEG): container finished" podID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerID="3eb1826dcf9588d3425ffdfdb7c83514bc17a6870ac6a4c206fb5768560409a2" exitCode=0 Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.452917 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2911e95-df39-40f3-932c-e2dcbe84491a","Type":"ContainerDied","Data":"3eb1826dcf9588d3425ffdfdb7c83514bc17a6870ac6a4c206fb5768560409a2"} Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.454885 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerStarted","Data":"8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d"} Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.891701 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.961582 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hfqk\" (UniqueName: \"kubernetes.io/projected/d2911e95-df39-40f3-932c-e2dcbe84491a-kube-api-access-2hfqk\") pod \"d2911e95-df39-40f3-932c-e2dcbe84491a\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.961679 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-config-data\") pod \"d2911e95-df39-40f3-932c-e2dcbe84491a\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.961860 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2911e95-df39-40f3-932c-e2dcbe84491a-logs\") pod \"d2911e95-df39-40f3-932c-e2dcbe84491a\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.961942 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-combined-ca-bundle\") pod \"d2911e95-df39-40f3-932c-e2dcbe84491a\" (UID: \"d2911e95-df39-40f3-932c-e2dcbe84491a\") " Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.964343 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2911e95-df39-40f3-932c-e2dcbe84491a-logs" (OuterVolumeSpecName: "logs") pod "d2911e95-df39-40f3-932c-e2dcbe84491a" (UID: "d2911e95-df39-40f3-932c-e2dcbe84491a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:52 crc kubenswrapper[4937]: I0123 06:57:52.973512 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2911e95-df39-40f3-932c-e2dcbe84491a-kube-api-access-2hfqk" (OuterVolumeSpecName: "kube-api-access-2hfqk") pod "d2911e95-df39-40f3-932c-e2dcbe84491a" (UID: "d2911e95-df39-40f3-932c-e2dcbe84491a"). InnerVolumeSpecName "kube-api-access-2hfqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.034152 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-config-data" (OuterVolumeSpecName: "config-data") pod "d2911e95-df39-40f3-932c-e2dcbe84491a" (UID: "d2911e95-df39-40f3-932c-e2dcbe84491a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.057440 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2911e95-df39-40f3-932c-e2dcbe84491a" (UID: "d2911e95-df39-40f3-932c-e2dcbe84491a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.065743 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hfqk\" (UniqueName: \"kubernetes.io/projected/d2911e95-df39-40f3-932c-e2dcbe84491a-kube-api-access-2hfqk\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.065782 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.065795 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2911e95-df39-40f3-932c-e2dcbe84491a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.065806 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2911e95-df39-40f3-932c-e2dcbe84491a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.074644 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.475499 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d2911e95-df39-40f3-932c-e2dcbe84491a","Type":"ContainerDied","Data":"e31ca2a87b6843c2a20fe3d6af08be10afaf8bb64c90439fd6400fb6053cff66"} Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.475573 4937 scope.go:117] "RemoveContainer" containerID="3eb1826dcf9588d3425ffdfdb7c83514bc17a6870ac6a4c206fb5768560409a2" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.475521 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.483984 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerStarted","Data":"a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d"} Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.484154 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-central-agent" containerID="cri-o://79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f" gracePeriod=30 Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.484355 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.484573 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="sg-core" containerID="cri-o://8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d" gracePeriod=30 Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.484689 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-notification-agent" containerID="cri-o://d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53" gracePeriod=30 Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.484555 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="proxy-httpd" containerID="cri-o://a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d" gracePeriod=30 Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.502356 4937 scope.go:117] 
"RemoveContainer" containerID="1b0b486f3c41da6be83c8fdcbb62f9085fa7d1129a384ca4169e2da214c2a2df" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.538239 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.212799583 podStartE2EDuration="4.538219286s" podCreationTimestamp="2026-01-23 06:57:49 +0000 UTC" firstStartedPulling="2026-01-23 06:57:50.378003466 +0000 UTC m=+1470.181770119" lastFinishedPulling="2026-01-23 06:57:52.703423169 +0000 UTC m=+1472.507189822" observedRunningTime="2026-01-23 06:57:53.520249508 +0000 UTC m=+1473.324016161" watchObservedRunningTime="2026-01-23 06:57:53.538219286 +0000 UTC m=+1473.341985939" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.544898 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.555621 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.569905 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:53 crc kubenswrapper[4937]: E0123 06:57:53.570528 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-log" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.570605 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-log" Jan 23 06:57:53 crc kubenswrapper[4937]: E0123 06:57:53.570669 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-api" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.570716 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-api" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.571003 4937 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-api" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.571100 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" containerName="nova-api-log" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.572310 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.578803 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.579714 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.579801 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.579944 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.676197 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.676290 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-config-data\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.676338 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.676416 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcpt\" (UniqueName: \"kubernetes.io/projected/462daef1-6c27-4ee2-abb2-e8f331256b3c-kube-api-access-nlcpt\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.676491 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462daef1-6c27-4ee2-abb2-e8f331256b3c-logs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.676526 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.778603 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462daef1-6c27-4ee2-abb2-e8f331256b3c-logs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.778676 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.778764 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.778795 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-config-data\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.778819 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.778879 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcpt\" (UniqueName: \"kubernetes.io/projected/462daef1-6c27-4ee2-abb2-e8f331256b3c-kube-api-access-nlcpt\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.779214 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462daef1-6c27-4ee2-abb2-e8f331256b3c-logs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.783948 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-public-tls-certs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.784315 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.786417 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-config-data\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.788204 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.800201 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcpt\" (UniqueName: \"kubernetes.io/projected/462daef1-6c27-4ee2-abb2-e8f331256b3c-kube-api-access-nlcpt\") pod \"nova-api-0\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " pod="openstack/nova-api-0" Jan 23 06:57:53 crc kubenswrapper[4937]: I0123 06:57:53.888790 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:57:54 crc kubenswrapper[4937]: W0123 06:57:54.400541 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod462daef1_6c27_4ee2_abb2_e8f331256b3c.slice/crio-506d0203407a4224010b4c52ca92960fb4f9b52e089bc206dbfa92a762cdd4b4 WatchSource:0}: Error finding container 506d0203407a4224010b4c52ca92960fb4f9b52e089bc206dbfa92a762cdd4b4: Status 404 returned error can't find the container with id 506d0203407a4224010b4c52ca92960fb4f9b52e089bc206dbfa92a762cdd4b4 Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.403888 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.509868 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"462daef1-6c27-4ee2-abb2-e8f331256b3c","Type":"ContainerStarted","Data":"506d0203407a4224010b4c52ca92960fb4f9b52e089bc206dbfa92a762cdd4b4"} Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.514002 4937 generic.go:334] "Generic (PLEG): container finished" podID="16893128-887d-469d-bc0d-63ece496c27d" containerID="a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d" exitCode=0 Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.514021 4937 generic.go:334] "Generic (PLEG): container finished" podID="16893128-887d-469d-bc0d-63ece496c27d" containerID="8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d" exitCode=2 Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.514029 4937 generic.go:334] "Generic (PLEG): container finished" podID="16893128-887d-469d-bc0d-63ece496c27d" containerID="d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53" exitCode=0 Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.514045 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerDied","Data":"a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d"} Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.514064 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerDied","Data":"8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d"} Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.514073 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerDied","Data":"d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53"} Jan 23 06:57:54 crc kubenswrapper[4937]: I0123 06:57:54.537910 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2911e95-df39-40f3-932c-e2dcbe84491a" path="/var/lib/kubelet/pods/d2911e95-df39-40f3-932c-e2dcbe84491a/volumes" Jan 23 06:57:55 crc kubenswrapper[4937]: I0123 06:57:55.525813 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"462daef1-6c27-4ee2-abb2-e8f331256b3c","Type":"ContainerStarted","Data":"9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c"} Jan 23 06:57:55 crc kubenswrapper[4937]: I0123 06:57:55.525861 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"462daef1-6c27-4ee2-abb2-e8f331256b3c","Type":"ContainerStarted","Data":"b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc"} Jan 23 06:57:55 crc kubenswrapper[4937]: I0123 06:57:55.559554 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.559532768 podStartE2EDuration="2.559532768s" podCreationTimestamp="2026-01-23 06:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 06:57:55.546633479 +0000 UTC m=+1475.350400142" watchObservedRunningTime="2026-01-23 06:57:55.559532768 +0000 UTC m=+1475.363299441" Jan 23 06:57:57 crc kubenswrapper[4937]: I0123 06:57:57.262814 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:57 crc kubenswrapper[4937]: I0123 06:57:57.317044 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:57 crc kubenswrapper[4937]: I0123 06:57:57.510483 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vr4mv"] Jan 23 06:57:57 crc kubenswrapper[4937]: I0123 06:57:57.939764 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.010143 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745797d8cc-znx7z"] Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.010373 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" containerName="dnsmasq-dns" containerID="cri-o://24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6" gracePeriod=10 Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.074343 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.098559 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.513738 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.561093 4937 generic.go:334] "Generic (PLEG): container finished" podID="5302d833-0e59-4aeb-8699-b782840b9fee" containerID="24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6" exitCode=0 Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.561393 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.562129 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vr4mv" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="registry-server" containerID="cri-o://965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2" gracePeriod=2 Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.574176 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-config\") pod \"5302d833-0e59-4aeb-8699-b782840b9fee\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.574290 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-nb\") pod \"5302d833-0e59-4aeb-8699-b782840b9fee\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.574325 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-svc\") pod \"5302d833-0e59-4aeb-8699-b782840b9fee\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.574368 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-swift-storage-0\") pod \"5302d833-0e59-4aeb-8699-b782840b9fee\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.574422 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-sb\") pod \"5302d833-0e59-4aeb-8699-b782840b9fee\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.574546 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpr79\" (UniqueName: \"kubernetes.io/projected/5302d833-0e59-4aeb-8699-b782840b9fee-kube-api-access-fpr79\") pod \"5302d833-0e59-4aeb-8699-b782840b9fee\" (UID: \"5302d833-0e59-4aeb-8699-b782840b9fee\") " Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.575905 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" event={"ID":"5302d833-0e59-4aeb-8699-b782840b9fee","Type":"ContainerDied","Data":"24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6"} Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.575947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745797d8cc-znx7z" event={"ID":"5302d833-0e59-4aeb-8699-b782840b9fee","Type":"ContainerDied","Data":"589fdb5b6349b2e14e1120e52fcc59b164347d7d9f39abec09e78c4490f37a69"} Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.575965 4937 scope.go:117] "RemoveContainer" containerID="24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.584283 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" 
Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.584758 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5302d833-0e59-4aeb-8699-b782840b9fee-kube-api-access-fpr79" (OuterVolumeSpecName: "kube-api-access-fpr79") pod "5302d833-0e59-4aeb-8699-b782840b9fee" (UID: "5302d833-0e59-4aeb-8699-b782840b9fee"). InnerVolumeSpecName "kube-api-access-fpr79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.679359 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpr79\" (UniqueName: \"kubernetes.io/projected/5302d833-0e59-4aeb-8699-b782840b9fee-kube-api-access-fpr79\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.704131 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5302d833-0e59-4aeb-8699-b782840b9fee" (UID: "5302d833-0e59-4aeb-8699-b782840b9fee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.727683 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5302d833-0e59-4aeb-8699-b782840b9fee" (UID: "5302d833-0e59-4aeb-8699-b782840b9fee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.732635 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-config" (OuterVolumeSpecName: "config") pod "5302d833-0e59-4aeb-8699-b782840b9fee" (UID: "5302d833-0e59-4aeb-8699-b782840b9fee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.741866 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5302d833-0e59-4aeb-8699-b782840b9fee" (UID: "5302d833-0e59-4aeb-8699-b782840b9fee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.747553 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5302d833-0e59-4aeb-8699-b782840b9fee" (UID: "5302d833-0e59-4aeb-8699-b782840b9fee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.780979 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.781033 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.781043 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.781052 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.781067 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5302d833-0e59-4aeb-8699-b782840b9fee-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.826811 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wp8f4"] Jan 23 06:57:58 crc kubenswrapper[4937]: E0123 06:57:58.827239 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" containerName="dnsmasq-dns" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.827258 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" containerName="dnsmasq-dns" Jan 23 06:57:58 crc kubenswrapper[4937]: E0123 06:57:58.827306 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" containerName="init" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.827313 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" containerName="init" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.827496 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" containerName="dnsmasq-dns" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.828148 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.830268 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.831247 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.842808 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wp8f4"] Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.846267 4937 scope.go:117] "RemoveContainer" containerID="c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.873583 4937 scope.go:117] "RemoveContainer" containerID="24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6" Jan 23 06:57:58 crc kubenswrapper[4937]: E0123 06:57:58.874496 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6\": container with ID starting with 24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6 not found: ID does not exist" containerID="24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.874561 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6"} err="failed to get container status \"24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6\": rpc error: code = NotFound desc = could not find container \"24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6\": container with ID starting with 24e8a57ae780a8aa2432ead1c8f28af7b08c0c55994c7f284c95927050aab1e6 not found: ID does not exist" Jan 
23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.874600 4937 scope.go:117] "RemoveContainer" containerID="c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d" Jan 23 06:57:58 crc kubenswrapper[4937]: E0123 06:57:58.874991 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d\": container with ID starting with c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d not found: ID does not exist" containerID="c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.875033 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d"} err="failed to get container status \"c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d\": rpc error: code = NotFound desc = could not find container \"c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d\": container with ID starting with c7d6743c67db057c807c1e90503606145fe29ef6c50674f76b03e3712d39578d not found: ID does not exist" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.885897 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.886237 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-scripts\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " 
pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.886342 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-config-data\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.886807 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z8px\" (UniqueName: \"kubernetes.io/projected/495ea696-2703-4613-ac3f-390fce312e4f-kube-api-access-2z8px\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.905199 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745797d8cc-znx7z"] Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.914284 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-745797d8cc-znx7z"] Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.988518 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.988623 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-scripts\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc 
kubenswrapper[4937]: I0123 06:57:58.988647 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-config-data\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.988701 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8px\" (UniqueName: \"kubernetes.io/projected/495ea696-2703-4613-ac3f-390fce312e4f-kube-api-access-2z8px\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.992749 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-config-data\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.993566 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-scripts\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:58 crc kubenswrapper[4937]: I0123 06:57:58.996756 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.003835 4937 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2z8px\" (UniqueName: \"kubernetes.io/projected/495ea696-2703-4613-ac3f-390fce312e4f-kube-api-access-2z8px\") pod \"nova-cell1-cell-mapping-wp8f4\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") " pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.061491 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091154 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-config-data\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091321 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-log-httpd\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091359 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4p5q\" (UniqueName: \"kubernetes.io/projected/16893128-887d-469d-bc0d-63ece496c27d-kube-api-access-s4p5q\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091388 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-combined-ca-bundle\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091463 4937 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-run-httpd\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091519 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-scripts\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091551 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-ceilometer-tls-certs\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.091630 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-sg-core-conf-yaml\") pod \"16893128-887d-469d-bc0d-63ece496c27d\" (UID: \"16893128-887d-469d-bc0d-63ece496c27d\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.106253 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.106960 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.108949 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-scripts" (OuterVolumeSpecName: "scripts") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.109679 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16893128-887d-469d-bc0d-63ece496c27d-kube-api-access-s4p5q" (OuterVolumeSpecName: "kube-api-access-s4p5q") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "kube-api-access-s4p5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.141072 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.156368 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wp8f4" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.193899 4937 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.193928 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4p5q\" (UniqueName: \"kubernetes.io/projected/16893128-887d-469d-bc0d-63ece496c27d-kube-api-access-s4p5q\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.193962 4937 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16893128-887d-469d-bc0d-63ece496c27d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.193972 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.193979 4937 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.200163 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.205236 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.223747 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-config-data" (OuterVolumeSpecName: "config-data") pod "16893128-887d-469d-bc0d-63ece496c27d" (UID: "16893128-887d-469d-bc0d-63ece496c27d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.298637 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.298669 4937 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.298681 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16893128-887d-469d-bc0d-63ece496c27d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.323491 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vr4mv" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.400204 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-utilities\") pod \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.400681 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25b55\" (UniqueName: \"kubernetes.io/projected/222dfe15-3dc2-4157-9648-9f64e79dcf1c-kube-api-access-25b55\") pod \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.400809 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-catalog-content\") pod \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\" (UID: \"222dfe15-3dc2-4157-9648-9f64e79dcf1c\") " Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.401123 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-utilities" (OuterVolumeSpecName: "utilities") pod "222dfe15-3dc2-4157-9648-9f64e79dcf1c" (UID: "222dfe15-3dc2-4157-9648-9f64e79dcf1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.401693 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.407356 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222dfe15-3dc2-4157-9648-9f64e79dcf1c-kube-api-access-25b55" (OuterVolumeSpecName: "kube-api-access-25b55") pod "222dfe15-3dc2-4157-9648-9f64e79dcf1c" (UID: "222dfe15-3dc2-4157-9648-9f64e79dcf1c"). InnerVolumeSpecName "kube-api-access-25b55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.503692 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25b55\" (UniqueName: \"kubernetes.io/projected/222dfe15-3dc2-4157-9648-9f64e79dcf1c-kube-api-access-25b55\") on node \"crc\" DevicePath \"\"" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.527033 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "222dfe15-3dc2-4157-9648-9f64e79dcf1c" (UID: "222dfe15-3dc2-4157-9648-9f64e79dcf1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.578707 4937 generic.go:334] "Generic (PLEG): container finished" podID="16893128-887d-469d-bc0d-63ece496c27d" containerID="79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f" exitCode=0 Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.578786 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerDied","Data":"79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f"} Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.578821 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16893128-887d-469d-bc0d-63ece496c27d","Type":"ContainerDied","Data":"c6ea9a457471807502d179804956cca701ad24544d8813ad1f54ccf52b6de5ef"} Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.578841 4937 scope.go:117] "RemoveContainer" containerID="a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.578976 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.586066 4937 generic.go:334] "Generic (PLEG): container finished" podID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerID="965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2" exitCode=0 Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.586133 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vr4mv"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.586177 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerDied","Data":"965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2"}
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.586241 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr4mv" event={"ID":"222dfe15-3dc2-4157-9648-9f64e79dcf1c","Type":"ContainerDied","Data":"79593d35e5abba4e73be16edd98266c85cfe3c401dfb9d45b4edf4e914171c42"}
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.605146 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/222dfe15-3dc2-4157-9648-9f64e79dcf1c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.608352 4937 scope.go:117] "RemoveContainer" containerID="8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.638863 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.641094 4937 scope.go:117] "RemoveContainer" containerID="d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.660421 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.671175 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vr4mv"]
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.679825 4937 scope.go:117] "RemoveContainer" containerID="79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.683819 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vr4mv"]
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.697292 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.697811 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="extract-content"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.697832 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="extract-content"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.697939 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="proxy-httpd"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.697955 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="proxy-httpd"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.697977 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="extract-utilities"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.697985 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="extract-utilities"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.698001 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-central-agent"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698009 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-central-agent"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.698025 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-notification-agent"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698034 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-notification-agent"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.698045 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="sg-core"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698052 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="sg-core"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.698068 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="registry-server"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698076 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="registry-server"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698304 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="proxy-httpd"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698317 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" containerName="registry-server"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698346 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-notification-agent"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698360 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="ceilometer-central-agent"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.698376 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="16893128-887d-469d-bc0d-63ece496c27d" containerName="sg-core"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.701041 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.702208 4937 scope.go:117] "RemoveContainer" containerID="a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.703329 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.703612 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.703745 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.705272 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d\": container with ID starting with a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d not found: ID does not exist" containerID="a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.705311 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d"} err="failed to get container status \"a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d\": rpc error: code = NotFound desc = could not find container \"a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d\": container with ID starting with a67b800b1516ff3d0aec53f74c6b86058a205831371ad0d69993e0ed0520058d not found: ID does not exist"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.705334 4937 scope.go:117] "RemoveContainer" containerID="8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.711737 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d\": container with ID starting with 8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d not found: ID does not exist" containerID="8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.711778 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d"} err="failed to get container status \"8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d\": rpc error: code = NotFound desc = could not find container \"8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d\": container with ID starting with 8b372671f2ffe489402d38c38c1f44bd063093062b4cd3d6ce2fbb22b190806d not found: ID does not exist"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.711800 4937 scope.go:117] "RemoveContainer" containerID="d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.712572 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53\": container with ID starting with d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53 not found: ID does not exist" containerID="d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.712887 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53"} err="failed to get container status \"d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53\": rpc error: code = NotFound desc = could not find container \"d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53\": container with ID starting with d03a430ac1d6054080396daa3f12929a20699e7403fd63cf3a4f8cf12c63bc53 not found: ID does not exist"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.712924 4937 scope.go:117] "RemoveContainer" containerID="79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.713202 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f\": container with ID starting with 79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f not found: ID does not exist" containerID="79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.713233 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f"} err="failed to get container status \"79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f\": rpc error: code = NotFound desc = could not find container \"79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f\": container with ID starting with 79f9d8a36dfd5d1670c61ba0e39645bb17ab85a1574f66417454105a50900d3f not found: ID does not exist"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.713251 4937 scope.go:117] "RemoveContainer" containerID="965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.719053 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.734291 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wp8f4"]
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.801873 4937 scope.go:117] "RemoveContainer" containerID="4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.812973 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813043 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cxd\" (UniqueName: \"kubernetes.io/projected/5ab05357-8ea2-47be-96c5-641abf53afe0-kube-api-access-72cxd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813105 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813145 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ab05357-8ea2-47be-96c5-641abf53afe0-run-httpd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813167 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813194 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-scripts\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813247 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ab05357-8ea2-47be-96c5-641abf53afe0-log-httpd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.813369 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-config-data\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.869860 4937 scope.go:117] "RemoveContainer" containerID="87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914610 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914832 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cxd\" (UniqueName: \"kubernetes.io/projected/5ab05357-8ea2-47be-96c5-641abf53afe0-kube-api-access-72cxd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914861 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914885 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ab05357-8ea2-47be-96c5-641abf53afe0-run-httpd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914901 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914918 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-scripts\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.914950 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ab05357-8ea2-47be-96c5-641abf53afe0-log-httpd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.915024 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-config-data\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.915377 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ab05357-8ea2-47be-96c5-641abf53afe0-run-httpd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.917311 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5ab05357-8ea2-47be-96c5-641abf53afe0-log-httpd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.922174 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-scripts\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.923010 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-config-data\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.924419 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.928411 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.930331 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ab05357-8ea2-47be-96c5-641abf53afe0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.938366 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72cxd\" (UniqueName: \"kubernetes.io/projected/5ab05357-8ea2-47be-96c5-641abf53afe0-kube-api-access-72cxd\") pod \"ceilometer-0\" (UID: \"5ab05357-8ea2-47be-96c5-641abf53afe0\") " pod="openstack/ceilometer-0"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.938930 4937 scope.go:117] "RemoveContainer" containerID="965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.939442 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2\": container with ID starting with 965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2 not found: ID does not exist" containerID="965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.939479 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2"} err="failed to get container status \"965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2\": rpc error: code = NotFound desc = could not find container \"965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2\": container with ID starting with 965994ae143ba897cec2fcb8dc27bf5e9b5bf3c7f6cf481bcdce95c7efa032a2 not found: ID does not exist"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.939503 4937 scope.go:117] "RemoveContainer" containerID="4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.939828 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2\": container with ID starting with 4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2 not found: ID does not exist" containerID="4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.939931 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2"} err="failed to get container status \"4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2\": rpc error: code = NotFound desc = could not find container \"4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2\": container with ID starting with 4edaecd0d60cb4f5ab9cde312fbf2eb1fc9dc6393654ae7122d6b1f22e32b1e2 not found: ID does not exist"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.940014 4937 scope.go:117] "RemoveContainer" containerID="87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b"
Jan 23 06:57:59 crc kubenswrapper[4937]: E0123 06:57:59.940297 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b\": container with ID starting with 87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b not found: ID does not exist" containerID="87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b"
Jan 23 06:57:59 crc kubenswrapper[4937]: I0123 06:57:59.940323 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b"} err="failed to get container status \"87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b\": rpc error: code = NotFound desc = could not find container \"87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b\": container with ID starting with 87336ba32679678615c9d0cfc009f4c4538ec0ce6dc0c9349a07d6496f47a05b not found: ID does not exist"
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.102976 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.551763 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16893128-887d-469d-bc0d-63ece496c27d" path="/var/lib/kubelet/pods/16893128-887d-469d-bc0d-63ece496c27d/volumes"
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.553452 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222dfe15-3dc2-4157-9648-9f64e79dcf1c" path="/var/lib/kubelet/pods/222dfe15-3dc2-4157-9648-9f64e79dcf1c/volumes"
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.555631 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5302d833-0e59-4aeb-8699-b782840b9fee" path="/var/lib/kubelet/pods/5302d833-0e59-4aeb-8699-b782840b9fee/volumes"
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.612838 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.612886 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wp8f4" event={"ID":"495ea696-2703-4613-ac3f-390fce312e4f","Type":"ContainerStarted","Data":"c28351eb66419de87a5c798d42d8b1b2d6118e2834e10355fc9062b3d5100fff"}
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.612913 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wp8f4" event={"ID":"495ea696-2703-4613-ac3f-390fce312e4f","Type":"ContainerStarted","Data":"4ed6d55f920230d7d6ef3c3f4671d815728639c8b9555230d9aad09c9015b92d"}
Jan 23 06:58:00 crc kubenswrapper[4937]: W0123 06:58:00.615871 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab05357_8ea2_47be_96c5_641abf53afe0.slice/crio-569080d3f0f5674a7aaa93fbcbc3673bc6f9b667a8df9945162e25eac9f427af WatchSource:0}: Error finding container 569080d3f0f5674a7aaa93fbcbc3673bc6f9b667a8df9945162e25eac9f427af: Status 404 returned error can't find the container with id 569080d3f0f5674a7aaa93fbcbc3673bc6f9b667a8df9945162e25eac9f427af
Jan 23 06:58:00 crc kubenswrapper[4937]: I0123 06:58:00.629933 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wp8f4" podStartSLOduration=2.629914807 podStartE2EDuration="2.629914807s" podCreationTimestamp="2026-01-23 06:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:58:00.629345571 +0000 UTC m=+1480.433112254" watchObservedRunningTime="2026-01-23 06:58:00.629914807 +0000 UTC m=+1480.433681460"
Jan 23 06:58:01 crc kubenswrapper[4937]: I0123 06:58:01.629340 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ab05357-8ea2-47be-96c5-641abf53afe0","Type":"ContainerStarted","Data":"ffac14da1fd5b3c6870a79b6b0a120893df9475463ee1393efca3bb277b67f5a"}
Jan 23 06:58:01 crc kubenswrapper[4937]: I0123 06:58:01.629842 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ab05357-8ea2-47be-96c5-641abf53afe0","Type":"ContainerStarted","Data":"20f9c8d24939a20542f015083c093fd417caea3b496d82104f143ac904b989b2"}
Jan 23 06:58:01 crc kubenswrapper[4937]: I0123 06:58:01.629856 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ab05357-8ea2-47be-96c5-641abf53afe0","Type":"ContainerStarted","Data":"569080d3f0f5674a7aaa93fbcbc3673bc6f9b667a8df9945162e25eac9f427af"}
Jan 23 06:58:02 crc kubenswrapper[4937]: I0123 06:58:02.646914 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ab05357-8ea2-47be-96c5-641abf53afe0","Type":"ContainerStarted","Data":"c5082319ecf6dd314de733e23136d69a903bb7d58bc3b6edb058ac853a1750a7"}
Jan 23 06:58:03 crc kubenswrapper[4937]: I0123 06:58:03.889370 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 06:58:03 crc kubenswrapper[4937]: I0123 06:58:03.890685 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 06:58:04 crc kubenswrapper[4937]: I0123 06:58:04.673555 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5ab05357-8ea2-47be-96c5-641abf53afe0","Type":"ContainerStarted","Data":"79b1452dff96f97e050af1f9a462dccb75e03b7208f51c0b0d8f9e940f046921"}
Jan 23 06:58:04 crc kubenswrapper[4937]: I0123 06:58:04.673974 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 06:58:04 crc kubenswrapper[4937]: I0123 06:58:04.712754 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.628337231 podStartE2EDuration="5.712733443s" podCreationTimestamp="2026-01-23 06:57:59 +0000 UTC" firstStartedPulling="2026-01-23 06:58:00.618008223 +0000 UTC m=+1480.421774896" lastFinishedPulling="2026-01-23 06:58:03.702404455 +0000 UTC m=+1483.506171108" observedRunningTime="2026-01-23 06:58:04.702797913 +0000 UTC m=+1484.506564566" watchObservedRunningTime="2026-01-23 06:58:04.712733443 +0000 UTC m=+1484.516500096"
Jan 23 06:58:04 crc kubenswrapper[4937]: I0123 06:58:04.901818 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 06:58:04 crc kubenswrapper[4937]: I0123 06:58:04.901768 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.226:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 06:58:05 crc kubenswrapper[4937]: I0123 06:58:05.702433 4937 generic.go:334] "Generic (PLEG): container finished" podID="495ea696-2703-4613-ac3f-390fce312e4f" containerID="c28351eb66419de87a5c798d42d8b1b2d6118e2834e10355fc9062b3d5100fff" exitCode=0
Jan 23 06:58:05 crc kubenswrapper[4937]: I0123 06:58:05.705192 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wp8f4" event={"ID":"495ea696-2703-4613-ac3f-390fce312e4f","Type":"ContainerDied","Data":"c28351eb66419de87a5c798d42d8b1b2d6118e2834e10355fc9062b3d5100fff"}
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.131891 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wp8f4"
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.211691 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-scripts\") pod \"495ea696-2703-4613-ac3f-390fce312e4f\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") "
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.213067 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z8px\" (UniqueName: \"kubernetes.io/projected/495ea696-2703-4613-ac3f-390fce312e4f-kube-api-access-2z8px\") pod \"495ea696-2703-4613-ac3f-390fce312e4f\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") "
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.213145 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-combined-ca-bundle\") pod \"495ea696-2703-4613-ac3f-390fce312e4f\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") "
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.213302 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-config-data\") pod \"495ea696-2703-4613-ac3f-390fce312e4f\" (UID: \"495ea696-2703-4613-ac3f-390fce312e4f\") "
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.218712 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-scripts" (OuterVolumeSpecName: "scripts") pod "495ea696-2703-4613-ac3f-390fce312e4f" (UID: "495ea696-2703-4613-ac3f-390fce312e4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.218747 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495ea696-2703-4613-ac3f-390fce312e4f-kube-api-access-2z8px" (OuterVolumeSpecName: "kube-api-access-2z8px") pod "495ea696-2703-4613-ac3f-390fce312e4f" (UID: "495ea696-2703-4613-ac3f-390fce312e4f"). InnerVolumeSpecName "kube-api-access-2z8px". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.251308 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "495ea696-2703-4613-ac3f-390fce312e4f" (UID: "495ea696-2703-4613-ac3f-390fce312e4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.262125 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-config-data" (OuterVolumeSpecName: "config-data") pod "495ea696-2703-4613-ac3f-390fce312e4f" (UID: "495ea696-2703-4613-ac3f-390fce312e4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.316262 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.316454 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.316467 4937 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/495ea696-2703-4613-ac3f-390fce312e4f-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.316479 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z8px\" (UniqueName: \"kubernetes.io/projected/495ea696-2703-4613-ac3f-390fce312e4f-kube-api-access-2z8px\") on node \"crc\" DevicePath \"\""
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.723824 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.723875 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.724116 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wp8f4" event={"ID":"495ea696-2703-4613-ac3f-390fce312e4f","Type":"ContainerDied","Data":"4ed6d55f920230d7d6ef3c3f4671d815728639c8b9555230d9aad09c9015b92d"}
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.724223 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wp8f4"
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.724226 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed6d55f920230d7d6ef3c3f4671d815728639c8b9555230d9aad09c9015b92d"
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.919117 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.919376 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fdefd02d-de3a-421a-8f1b-d5a121658469" containerName="nova-scheduler-scheduler" containerID="cri-o://6b4ab85b29add7dcdc2466c645a803fd92df5f3f174f69e5a7844c60535b67f8" gracePeriod=30
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.931879 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.932211 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-api" containerID="cri-o://9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c" gracePeriod=30
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.932635 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-log" containerID="cri-o://b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc" gracePeriod=30
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.952969 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.953537 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-log" containerID="cri-o://c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019" gracePeriod=30
Jan 23 06:58:07 crc kubenswrapper[4937]: I0123 06:58:07.953648 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-metadata" containerID="cri-o://c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c" gracePeriod=30
Jan 23 06:58:08 crc kubenswrapper[4937]: I0123 06:58:08.734089 4937 generic.go:334] "Generic (PLEG): container finished" podID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerID="c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019" exitCode=143
Jan 23 06:58:08 crc kubenswrapper[4937]: I0123 06:58:08.734147 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c99a5f22-8b8f-4e04-ac09-199c03bb1630","Type":"ContainerDied","Data":"c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019"}
Jan 23 06:58:08 crc kubenswrapper[4937]: I0123 06:58:08.735839 4937 generic.go:334] "Generic (PLEG): container finished" podID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerID="b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc" exitCode=143
Jan 23 06:58:08 crc kubenswrapper[4937]: I0123 06:58:08.735863 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"462daef1-6c27-4ee2-abb2-e8f331256b3c","Type":"ContainerDied","Data":"b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc"}
Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.414020 4937
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.421285 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567291 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-config-data\") pod \"462daef1-6c27-4ee2-abb2-e8f331256b3c\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567351 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-internal-tls-certs\") pod \"462daef1-6c27-4ee2-abb2-e8f331256b3c\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567422 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv7ct\" (UniqueName: \"kubernetes.io/projected/c99a5f22-8b8f-4e04-ac09-199c03bb1630-kube-api-access-xv7ct\") pod \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567449 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-public-tls-certs\") pod \"462daef1-6c27-4ee2-abb2-e8f331256b3c\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567525 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99a5f22-8b8f-4e04-ac09-199c03bb1630-logs\") pod \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\" (UID: 
\"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567568 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-config-data\") pod \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567702 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-combined-ca-bundle\") pod \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567752 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlcpt\" (UniqueName: \"kubernetes.io/projected/462daef1-6c27-4ee2-abb2-e8f331256b3c-kube-api-access-nlcpt\") pod \"462daef1-6c27-4ee2-abb2-e8f331256b3c\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567772 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462daef1-6c27-4ee2-abb2-e8f331256b3c-logs\") pod \"462daef1-6c27-4ee2-abb2-e8f331256b3c\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567887 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-combined-ca-bundle\") pod \"462daef1-6c27-4ee2-abb2-e8f331256b3c\" (UID: \"462daef1-6c27-4ee2-abb2-e8f331256b3c\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.567933 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-nova-metadata-tls-certs\") pod \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\" (UID: \"c99a5f22-8b8f-4e04-ac09-199c03bb1630\") " Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.568967 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/462daef1-6c27-4ee2-abb2-e8f331256b3c-logs" (OuterVolumeSpecName: "logs") pod "462daef1-6c27-4ee2-abb2-e8f331256b3c" (UID: "462daef1-6c27-4ee2-abb2-e8f331256b3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.569409 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99a5f22-8b8f-4e04-ac09-199c03bb1630-logs" (OuterVolumeSpecName: "logs") pod "c99a5f22-8b8f-4e04-ac09-199c03bb1630" (UID: "c99a5f22-8b8f-4e04-ac09-199c03bb1630"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.572863 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462daef1-6c27-4ee2-abb2-e8f331256b3c-kube-api-access-nlcpt" (OuterVolumeSpecName: "kube-api-access-nlcpt") pod "462daef1-6c27-4ee2-abb2-e8f331256b3c" (UID: "462daef1-6c27-4ee2-abb2-e8f331256b3c"). InnerVolumeSpecName "kube-api-access-nlcpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.585709 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99a5f22-8b8f-4e04-ac09-199c03bb1630-kube-api-access-xv7ct" (OuterVolumeSpecName: "kube-api-access-xv7ct") pod "c99a5f22-8b8f-4e04-ac09-199c03bb1630" (UID: "c99a5f22-8b8f-4e04-ac09-199c03bb1630"). InnerVolumeSpecName "kube-api-access-xv7ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.619998 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462daef1-6c27-4ee2-abb2-e8f331256b3c" (UID: "462daef1-6c27-4ee2-abb2-e8f331256b3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.626480 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c99a5f22-8b8f-4e04-ac09-199c03bb1630" (UID: "c99a5f22-8b8f-4e04-ac09-199c03bb1630"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.632125 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-config-data" (OuterVolumeSpecName: "config-data") pod "462daef1-6c27-4ee2-abb2-e8f331256b3c" (UID: "462daef1-6c27-4ee2-abb2-e8f331256b3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.643723 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-config-data" (OuterVolumeSpecName: "config-data") pod "c99a5f22-8b8f-4e04-ac09-199c03bb1630" (UID: "c99a5f22-8b8f-4e04-ac09-199c03bb1630"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670524 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670557 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670565 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv7ct\" (UniqueName: \"kubernetes.io/projected/c99a5f22-8b8f-4e04-ac09-199c03bb1630-kube-api-access-xv7ct\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670577 4937 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c99a5f22-8b8f-4e04-ac09-199c03bb1630-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670616 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670629 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670639 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlcpt\" (UniqueName: \"kubernetes.io/projected/462daef1-6c27-4ee2-abb2-e8f331256b3c-kube-api-access-nlcpt\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.670651 4937 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/462daef1-6c27-4ee2-abb2-e8f331256b3c-logs\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.671487 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "462daef1-6c27-4ee2-abb2-e8f331256b3c" (UID: "462daef1-6c27-4ee2-abb2-e8f331256b3c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.683844 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "462daef1-6c27-4ee2-abb2-e8f331256b3c" (UID: "462daef1-6c27-4ee2-abb2-e8f331256b3c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.689458 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c99a5f22-8b8f-4e04-ac09-199c03bb1630" (UID: "c99a5f22-8b8f-4e04-ac09-199c03bb1630"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.754819 4937 generic.go:334] "Generic (PLEG): container finished" podID="fdefd02d-de3a-421a-8f1b-d5a121658469" containerID="6b4ab85b29add7dcdc2466c645a803fd92df5f3f174f69e5a7844c60535b67f8" exitCode=0 Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.754899 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdefd02d-de3a-421a-8f1b-d5a121658469","Type":"ContainerDied","Data":"6b4ab85b29add7dcdc2466c645a803fd92df5f3f174f69e5a7844c60535b67f8"} Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.757685 4937 generic.go:334] "Generic (PLEG): container finished" podID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerID="c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c" exitCode=0 Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.757747 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c99a5f22-8b8f-4e04-ac09-199c03bb1630","Type":"ContainerDied","Data":"c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c"} Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.757774 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c99a5f22-8b8f-4e04-ac09-199c03bb1630","Type":"ContainerDied","Data":"c2fa04332e86b32bd5889b029ad4b008e031c07e68994690529919bf6b0392dd"} Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.757795 4937 scope.go:117] "RemoveContainer" containerID="c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.757943 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.768502 4937 generic.go:334] "Generic (PLEG): container finished" podID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerID="9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c" exitCode=0 Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.768546 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"462daef1-6c27-4ee2-abb2-e8f331256b3c","Type":"ContainerDied","Data":"9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c"} Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.768572 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"462daef1-6c27-4ee2-abb2-e8f331256b3c","Type":"ContainerDied","Data":"506d0203407a4224010b4c52ca92960fb4f9b52e089bc206dbfa92a762cdd4b4"} Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.768674 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.774825 4937 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99a5f22-8b8f-4e04-ac09-199c03bb1630-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.774856 4937 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.774865 4937 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/462daef1-6c27-4ee2-abb2-e8f331256b3c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.798664 4937 scope.go:117] "RemoveContainer" containerID="c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.810355 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.820160 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.831541 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.842867 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.856875 4937 scope.go:117] "RemoveContainer" containerID="c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.859533 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 06:58:09 crc 
kubenswrapper[4937]: E0123 06:58:09.859964 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-api" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.859982 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-api" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.860003 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-log" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860010 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-log" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.860026 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495ea696-2703-4613-ac3f-390fce312e4f" containerName="nova-manage" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860032 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="495ea696-2703-4613-ac3f-390fce312e4f" containerName="nova-manage" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.860044 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-metadata" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860051 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-metadata" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.860060 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-log" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860065 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-log" Jan 23 06:58:09 crc 
kubenswrapper[4937]: I0123 06:58:09.860242 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="495ea696-2703-4613-ac3f-390fce312e4f" containerName="nova-manage" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860260 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-log" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860268 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-metadata" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860277 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" containerName="nova-api-api" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.860290 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" containerName="nova-metadata-log" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.861308 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.862266 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c\": container with ID starting with c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c not found: ID does not exist" containerID="c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.862294 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c"} err="failed to get container status \"c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c\": rpc error: code = NotFound desc = could not find container \"c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c\": container with ID starting with c174004bff251ef07a9ef6390a2acb4ab56d80f4358c0749fd7d7ba689a3cc3c not found: ID does not exist" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.862318 4937 scope.go:117] "RemoveContainer" containerID="c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.864493 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.864658 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019\": container with ID starting with c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019 not found: ID does not exist" containerID="c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.864704 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019"} err="failed to get container status \"c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019\": rpc error: code = NotFound desc = could not find container \"c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019\": container with ID starting with c7ec3504f51eb5c71ce7d58021b8405dc34af35d89065f0b0ae6ae7538d38019 not found: ID does not exist" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.864726 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.864736 4937 scope.go:117] "RemoveContainer" containerID="9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.864969 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.871397 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.882375 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.886731 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.890093 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.890189 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.893184 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.909551 4937 scope.go:117] "RemoveContainer" containerID="b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.939381 4937 scope.go:117] "RemoveContainer" containerID="9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.939879 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c\": container with ID starting with 9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c not found: ID does not exist" containerID="9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.939905 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c"} err="failed to get container status \"9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c\": rpc error: code = NotFound desc = could not find container \"9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c\": container with ID starting with 9e6542c6d3aec2c87b303ccdef6aa246254424e0ae17450cc8f19a760181d02c not found: ID does not exist" Jan 23 06:58:09 crc 
kubenswrapper[4937]: I0123 06:58:09.939926 4937 scope.go:117] "RemoveContainer" containerID="b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc" Jan 23 06:58:09 crc kubenswrapper[4937]: E0123 06:58:09.940219 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc\": container with ID starting with b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc not found: ID does not exist" containerID="b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.940234 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc"} err="failed to get container status \"b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc\": rpc error: code = NotFound desc = could not find container \"b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc\": container with ID starting with b342b8a7c83e941197a14b5a09ff80e18622c9efca2769a9db5432f3b11a6ecc not found: ID does not exist" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977653 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977703 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdmj9\" (UniqueName: \"kubernetes.io/projected/77e26d95-217b-443b-9c9c-ff2246f5aeb4-kube-api-access-kdmj9\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 
06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977754 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977797 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-logs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977825 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977848 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e26d95-217b-443b-9c9c-ff2246f5aeb4-logs\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977866 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkjb\" (UniqueName: \"kubernetes.io/projected/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-kube-api-access-ckkjb\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977884 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-config-data\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977901 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-config-data\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.977939 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:09 crc kubenswrapper[4937]: I0123 06:58:09.978068 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.060657 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.079854 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-logs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.079916 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.079957 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e26d95-217b-443b-9c9c-ff2246f5aeb4-logs\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.079990 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkjb\" (UniqueName: \"kubernetes.io/projected/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-kube-api-access-ckkjb\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080021 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-config-data\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080044 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-config-data\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080096 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080123 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080206 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080245 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdmj9\" (UniqueName: \"kubernetes.io/projected/77e26d95-217b-443b-9c9c-ff2246f5aeb4-kube-api-access-kdmj9\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.080303 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " 
pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.092220 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-logs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.092539 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77e26d95-217b-443b-9c9c-ff2246f5aeb4-logs\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.100309 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-public-tls-certs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.109776 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.122801 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkjb\" (UniqueName: \"kubernetes.io/projected/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-kube-api-access-ckkjb\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.124967 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdmj9\" (UniqueName: 
\"kubernetes.io/projected/77e26d95-217b-443b-9c9c-ff2246f5aeb4-kube-api-access-kdmj9\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.128355 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.129974 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.130480 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcf63c3-cd33-4ce7-92cd-78d3001b33dc-config-data\") pod \"nova-api-0\" (UID: \"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc\") " pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.132289 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-config-data\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.141858 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e26d95-217b-443b-9c9c-ff2246f5aeb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"77e26d95-217b-443b-9c9c-ff2246f5aeb4\") " pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: 
I0123 06:58:10.183282 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/fdefd02d-de3a-421a-8f1b-d5a121658469-kube-api-access-h8l66\") pod \"fdefd02d-de3a-421a-8f1b-d5a121658469\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.183457 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-combined-ca-bundle\") pod \"fdefd02d-de3a-421a-8f1b-d5a121658469\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.183499 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-config-data\") pod \"fdefd02d-de3a-421a-8f1b-d5a121658469\" (UID: \"fdefd02d-de3a-421a-8f1b-d5a121658469\") " Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.192309 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.193221 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdefd02d-de3a-421a-8f1b-d5a121658469-kube-api-access-h8l66" (OuterVolumeSpecName: "kube-api-access-h8l66") pod "fdefd02d-de3a-421a-8f1b-d5a121658469" (UID: "fdefd02d-de3a-421a-8f1b-d5a121658469"). InnerVolumeSpecName "kube-api-access-h8l66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.215954 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.252400 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdefd02d-de3a-421a-8f1b-d5a121658469" (UID: "fdefd02d-de3a-421a-8f1b-d5a121658469"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.279857 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-config-data" (OuterVolumeSpecName: "config-data") pod "fdefd02d-de3a-421a-8f1b-d5a121658469" (UID: "fdefd02d-de3a-421a-8f1b-d5a121658469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.286296 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/fdefd02d-de3a-421a-8f1b-d5a121658469-kube-api-access-h8l66\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.286329 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.286338 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdefd02d-de3a-421a-8f1b-d5a121658469-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.540010 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462daef1-6c27-4ee2-abb2-e8f331256b3c" path="/var/lib/kubelet/pods/462daef1-6c27-4ee2-abb2-e8f331256b3c/volumes" Jan 
23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.541422 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99a5f22-8b8f-4e04-ac09-199c03bb1630" path="/var/lib/kubelet/pods/c99a5f22-8b8f-4e04-ac09-199c03bb1630/volumes" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.715982 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.733097 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 06:58:10 crc kubenswrapper[4937]: W0123 06:58:10.734781 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77e26d95_217b_443b_9c9c_ff2246f5aeb4.slice/crio-b4d48c07c8e323b0c0c8cd02a44430afe07fc7f731459eca71cf4f1f94ea0c0f WatchSource:0}: Error finding container b4d48c07c8e323b0c0c8cd02a44430afe07fc7f731459eca71cf4f1f94ea0c0f: Status 404 returned error can't find the container with id b4d48c07c8e323b0c0c8cd02a44430afe07fc7f731459eca71cf4f1f94ea0c0f Jan 23 06:58:10 crc kubenswrapper[4937]: W0123 06:58:10.735679 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbcf63c3_cd33_4ce7_92cd_78d3001b33dc.slice/crio-c181683afec08f1f34fe460174c6522a1b9ee75da7e6072759d21b8e138139e1 WatchSource:0}: Error finding container c181683afec08f1f34fe460174c6522a1b9ee75da7e6072759d21b8e138139e1: Status 404 returned error can't find the container with id c181683afec08f1f34fe460174c6522a1b9ee75da7e6072759d21b8e138139e1 Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.791392 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77e26d95-217b-443b-9c9c-ff2246f5aeb4","Type":"ContainerStarted","Data":"b4d48c07c8e323b0c0c8cd02a44430afe07fc7f731459eca71cf4f1f94ea0c0f"} Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.794268 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc","Type":"ContainerStarted","Data":"c181683afec08f1f34fe460174c6522a1b9ee75da7e6072759d21b8e138139e1"} Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.796883 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fdefd02d-de3a-421a-8f1b-d5a121658469","Type":"ContainerDied","Data":"4dc17d8281124f377f8f3e64850c24f6926e73a14c0dadc6461fa0c0cd624596"} Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.796914 4937 scope.go:117] "RemoveContainer" containerID="6b4ab85b29add7dcdc2466c645a803fd92df5f3f174f69e5a7844c60535b67f8" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.796988 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.958660 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.969083 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.980301 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:58:10 crc kubenswrapper[4937]: E0123 06:58:10.980962 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdefd02d-de3a-421a-8f1b-d5a121658469" containerName="nova-scheduler-scheduler" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.981000 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdefd02d-de3a-421a-8f1b-d5a121658469" containerName="nova-scheduler-scheduler" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.981347 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdefd02d-de3a-421a-8f1b-d5a121658469" containerName="nova-scheduler-scheduler" Jan 23 06:58:10 crc 
kubenswrapper[4937]: I0123 06:58:10.982375 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.986245 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 06:58:10 crc kubenswrapper[4937]: I0123 06:58:10.987163 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.105920 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fb1734-ce06-4cda-ae43-e5152113b20a-config-data\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.106035 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkx8w\" (UniqueName: \"kubernetes.io/projected/39fb1734-ce06-4cda-ae43-e5152113b20a-kube-api-access-xkx8w\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.106063 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fb1734-ce06-4cda-ae43-e5152113b20a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.208274 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fb1734-ce06-4cda-ae43-e5152113b20a-config-data\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 
06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.208802 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkx8w\" (UniqueName: \"kubernetes.io/projected/39fb1734-ce06-4cda-ae43-e5152113b20a-kube-api-access-xkx8w\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.209292 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fb1734-ce06-4cda-ae43-e5152113b20a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.213324 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fb1734-ce06-4cda-ae43-e5152113b20a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.213928 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fb1734-ce06-4cda-ae43-e5152113b20a-config-data\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.232625 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkx8w\" (UniqueName: \"kubernetes.io/projected/39fb1734-ce06-4cda-ae43-e5152113b20a-kube-api-access-xkx8w\") pod \"nova-scheduler-0\" (UID: \"39fb1734-ce06-4cda-ae43-e5152113b20a\") " pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.302307 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 06:58:11 crc kubenswrapper[4937]: W0123 06:58:11.815290 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39fb1734_ce06_4cda_ae43_e5152113b20a.slice/crio-a44ee1f2c510cc34b48c15e09b90c8954d4f459ec471a1ed52b990f23b6cd8b1 WatchSource:0}: Error finding container a44ee1f2c510cc34b48c15e09b90c8954d4f459ec471a1ed52b990f23b6cd8b1: Status 404 returned error can't find the container with id a44ee1f2c510cc34b48c15e09b90c8954d4f459ec471a1ed52b990f23b6cd8b1 Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.816415 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.822922 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77e26d95-217b-443b-9c9c-ff2246f5aeb4","Type":"ContainerStarted","Data":"74690e3be0cbfcc33a24e600dabe8b6fb90c368f877bfdb0738f26dfc3540dcd"} Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.822974 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"77e26d95-217b-443b-9c9c-ff2246f5aeb4","Type":"ContainerStarted","Data":"19dd17ec10f7ce74bcd75e10dfec6ee6c0ee2355bf705a9f18d18415b96b6c96"} Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.826268 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc","Type":"ContainerStarted","Data":"51e0a191a10620687f799fcac95615815bddcfcce53245511ebf53160b7587ee"} Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.826315 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbcf63c3-cd33-4ce7-92cd-78d3001b33dc","Type":"ContainerStarted","Data":"3d68f9476b5c540b396f6f666e8d96c078bb1d031c5f23b4e27c61e59843dbc9"} Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 
06:58:11.859490 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8594712700000002 podStartE2EDuration="2.85947127s" podCreationTimestamp="2026-01-23 06:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:58:11.841239923 +0000 UTC m=+1491.645006586" watchObservedRunningTime="2026-01-23 06:58:11.85947127 +0000 UTC m=+1491.663237923" Jan 23 06:58:11 crc kubenswrapper[4937]: I0123 06:58:11.888051 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.888034288 podStartE2EDuration="2.888034288s" podCreationTimestamp="2026-01-23 06:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:58:11.880073662 +0000 UTC m=+1491.683840315" watchObservedRunningTime="2026-01-23 06:58:11.888034288 +0000 UTC m=+1491.691800941" Jan 23 06:58:12 crc kubenswrapper[4937]: I0123 06:58:12.545257 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdefd02d-de3a-421a-8f1b-d5a121658469" path="/var/lib/kubelet/pods/fdefd02d-de3a-421a-8f1b-d5a121658469/volumes" Jan 23 06:58:12 crc kubenswrapper[4937]: I0123 06:58:12.840451 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39fb1734-ce06-4cda-ae43-e5152113b20a","Type":"ContainerStarted","Data":"e85f3c7dfc6ea0d682549c0f6d8f660ff4b1060ac787765c6b8cfd3104326251"} Jan 23 06:58:12 crc kubenswrapper[4937]: I0123 06:58:12.840518 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"39fb1734-ce06-4cda-ae43-e5152113b20a","Type":"ContainerStarted","Data":"a44ee1f2c510cc34b48c15e09b90c8954d4f459ec471a1ed52b990f23b6cd8b1"} Jan 23 06:58:12 crc kubenswrapper[4937]: I0123 06:58:12.871755 4937 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.871729094 podStartE2EDuration="2.871729094s" podCreationTimestamp="2026-01-23 06:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:58:12.857689821 +0000 UTC m=+1492.661456474" watchObservedRunningTime="2026-01-23 06:58:12.871729094 +0000 UTC m=+1492.675495757" Jan 23 06:58:15 crc kubenswrapper[4937]: I0123 06:58:15.217603 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:58:15 crc kubenswrapper[4937]: I0123 06:58:15.217848 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 06:58:16 crc kubenswrapper[4937]: I0123 06:58:16.303906 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 06:58:20 crc kubenswrapper[4937]: I0123 06:58:20.193528 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:58:20 crc kubenswrapper[4937]: I0123 06:58:20.194023 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 06:58:20 crc kubenswrapper[4937]: I0123 06:58:20.216932 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:58:20 crc kubenswrapper[4937]: I0123 06:58:20.217298 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.223871 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bbcf63c3-cd33-4ce7-92cd-78d3001b33dc" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.223913 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bbcf63c3-cd33-4ce7-92cd-78d3001b33dc" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.236790 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="77e26d95-217b-443b-9c9c-ff2246f5aeb4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.236871 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="77e26d95-217b-443b-9c9c-ff2246f5aeb4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.230:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.303113 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.337329 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 06:58:21 crc kubenswrapper[4937]: I0123 06:58:21.932935 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.110222 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.211703 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 06:58:30 crc 
kubenswrapper[4937]: I0123 06:58:30.213399 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.213780 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.227460 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.228900 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.235042 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.240530 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.970003 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.977088 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 06:58:30 crc kubenswrapper[4937]: I0123 06:58:30.995487 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 06:58:37 crc kubenswrapper[4937]: I0123 06:58:37.724395 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 06:58:37 crc kubenswrapper[4937]: I0123 06:58:37.724995 4937 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 06:58:37 crc kubenswrapper[4937]: I0123 06:58:37.725050 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 06:58:37 crc kubenswrapper[4937]: I0123 06:58:37.725801 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a10facfc39aee45969b033e840bb8f688c490ef6b6d364e9ea5f238aa72a3af0"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 06:58:37 crc kubenswrapper[4937]: I0123 06:58:37.725855 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://a10facfc39aee45969b033e840bb8f688c490ef6b6d364e9ea5f238aa72a3af0" gracePeriod=600 Jan 23 06:58:38 crc kubenswrapper[4937]: I0123 06:58:38.037700 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="a10facfc39aee45969b033e840bb8f688c490ef6b6d364e9ea5f238aa72a3af0" exitCode=0 Jan 23 06:58:38 crc kubenswrapper[4937]: I0123 06:58:38.037783 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"a10facfc39aee45969b033e840bb8f688c490ef6b6d364e9ea5f238aa72a3af0"} Jan 23 06:58:38 crc kubenswrapper[4937]: I0123 06:58:38.038039 4937 scope.go:117] "RemoveContainer" 
containerID="2cafd2f65b886b18799192369211303c6e133b845292356bb63a572ac676cc20" Jan 23 06:58:39 crc kubenswrapper[4937]: I0123 06:58:39.049854 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a"} Jan 23 06:58:40 crc kubenswrapper[4937]: I0123 06:58:40.992484 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:58:42 crc kubenswrapper[4937]: I0123 06:58:42.044966 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:58:44 crc kubenswrapper[4937]: I0123 06:58:44.692739 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="rabbitmq" containerID="cri-o://622c0fd4f9c73a1bee1b8531d0f45bda90fe44028f07984f086365d5b29e13b1" gracePeriod=604797 Jan 23 06:58:45 crc kubenswrapper[4937]: I0123 06:58:45.486720 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="rabbitmq" containerID="cri-o://1150dab81040022d4c2b88d4e39c49a7006fee9c4e027739d4bb4730700b48b0" gracePeriod=604797 Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.119875 4937 generic.go:334] "Generic (PLEG): container finished" podID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerID="622c0fd4f9c73a1bee1b8531d0f45bda90fe44028f07984f086365d5b29e13b1" exitCode=0 Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.120166 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6","Type":"ContainerDied","Data":"622c0fd4f9c73a1bee1b8531d0f45bda90fe44028f07984f086365d5b29e13b1"} Jan 23 
06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.411098 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564325 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-pod-info\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564391 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-confd\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564441 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-erlang-cookie\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564527 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564563 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-server-conf\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564579 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-plugins\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564623 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-erlang-cookie-secret\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564683 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-plugins-conf\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564703 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-config-data\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564848 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-tls\") pod \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.564874 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvkj5\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-kube-api-access-hvkj5\") pod 
\"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\" (UID: \"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6\") " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.566540 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.567116 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.570888 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.574064 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-kube-api-access-hvkj5" (OuterVolumeSpecName: "kube-api-access-hvkj5") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "kube-api-access-hvkj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.579860 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-pod-info" (OuterVolumeSpecName: "pod-info") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.582032 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.583847 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.587252 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.608677 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-config-data" (OuterVolumeSpecName: "config-data") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.667845 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-server-conf" (OuterVolumeSpecName: "server-conf") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669147 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669178 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvkj5\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-kube-api-access-hvkj5\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669188 4937 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669196 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-erlang-cookie\") on node \"crc\" 
DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669217 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669225 4937 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669233 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669242 4937 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669251 4937 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.669260 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.714060 4937 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.755218 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" (UID: "c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.771906 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:46 crc kubenswrapper[4937]: I0123 06:58:46.771947 4937 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.143993 4937 generic.go:334] "Generic (PLEG): container finished" podID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerID="1150dab81040022d4c2b88d4e39c49a7006fee9c4e027739d4bb4730700b48b0" exitCode=0 Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.144305 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c22daa68-7c34-4180-adcc-d939bfa5a607","Type":"ContainerDied","Data":"1150dab81040022d4c2b88d4e39c49a7006fee9c4e027739d4bb4730700b48b0"} Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.144331 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c22daa68-7c34-4180-adcc-d939bfa5a607","Type":"ContainerDied","Data":"35f6874d7aeb8b91ed1191607f1917e454c9bdda2d998b2b3b22f1efffeeaf70"} Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.144340 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f6874d7aeb8b91ed1191607f1917e454c9bdda2d998b2b3b22f1efffeeaf70" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.146734 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6","Type":"ContainerDied","Data":"d9ace14a6134c1dc077ceeaab231eb84e4c39cf1a83e76b15f050bb96fb8da56"} Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.146791 4937 scope.go:117] "RemoveContainer" containerID="622c0fd4f9c73a1bee1b8531d0f45bda90fe44028f07984f086365d5b29e13b1" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.146915 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.244280 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.254701 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.266967 4937 scope.go:117] "RemoveContainer" containerID="ed4d6735eacad8b5669c077204ba76ee680315c08659282ac0774ec434cbe540" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.277249 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.291619 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:58:47 crc kubenswrapper[4937]: E0123 06:58:47.292006 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="rabbitmq" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.292024 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="rabbitmq" Jan 23 06:58:47 crc kubenswrapper[4937]: E0123 06:58:47.292044 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="setup-container" Jan 23 06:58:47 crc 
kubenswrapper[4937]: I0123 06:58:47.292052 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="setup-container" Jan 23 06:58:47 crc kubenswrapper[4937]: E0123 06:58:47.292067 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="setup-container" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.292075 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="setup-container" Jan 23 06:58:47 crc kubenswrapper[4937]: E0123 06:58:47.292090 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="rabbitmq" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.292096 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="rabbitmq" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.292257 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" containerName="rabbitmq" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.292282 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" containerName="rabbitmq" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.293244 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.296005 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.296169 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.296409 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.296556 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.296792 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.298863 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rzll5" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.299449 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.330003 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411266 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-erlang-cookie\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411338 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-config-data\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411381 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c22daa68-7c34-4180-adcc-d939bfa5a607-pod-info\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411399 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-server-conf\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411425 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-plugins\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411506 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c22daa68-7c34-4180-adcc-d939bfa5a607-erlang-cookie-secret\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411567 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411669 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg4dh\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-kube-api-access-fg4dh\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411704 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-plugins-conf\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411724 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-confd\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411777 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-tls\") pod \"c22daa68-7c34-4180-adcc-d939bfa5a607\" (UID: \"c22daa68-7c34-4180-adcc-d939bfa5a607\") " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.411992 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dea64b9-318c-40db-8f2f-bd0e18587ddf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412027 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9dea64b9-318c-40db-8f2f-bd0e18587ddf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412064 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412094 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-config-data\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412161 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gblv\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-kube-api-access-4gblv\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412181 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412202 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412227 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412260 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412305 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.412330 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.418930 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c22daa68-7c34-4180-adcc-d939bfa5a607-pod-info" (OuterVolumeSpecName: "pod-info") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: 
"c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.419965 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.419983 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.422485 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22daa68-7c34-4180-adcc-d939bfa5a607-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.423153 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.438866 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.450620 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.465719 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-kube-api-access-fg4dh" (OuterVolumeSpecName: "kube-api-access-fg4dh") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "kube-api-access-fg4dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.486392 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-config-data" (OuterVolumeSpecName: "config-data") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.517823 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.517917 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.517955 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.517979 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dea64b9-318c-40db-8f2f-bd0e18587ddf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518011 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dea64b9-318c-40db-8f2f-bd0e18587ddf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518048 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518080 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-config-data\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518165 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gblv\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-kube-api-access-4gblv\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518195 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518223 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518250 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518353 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518370 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg4dh\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-kube-api-access-fg4dh\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518385 4937 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518396 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518408 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518420 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518431 4937 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c22daa68-7c34-4180-adcc-d939bfa5a607-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc 
kubenswrapper[4937]: I0123 06:58:47.518442 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.518453 4937 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c22daa68-7c34-4180-adcc-d939bfa5a607-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.521195 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.522783 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.523718 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-config-data\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.524135 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 
06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.529345 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9dea64b9-318c-40db-8f2f-bd0e18587ddf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.532718 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9dea64b9-318c-40db-8f2f-bd0e18587ddf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.533359 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9dea64b9-318c-40db-8f2f-bd0e18587ddf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.534108 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.558741 4937 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.558943 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-server-conf" (OuterVolumeSpecName: "server-conf") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: 
"c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.575330 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.575575 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.577172 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.578139 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gblv\" (UniqueName: \"kubernetes.io/projected/9dea64b9-318c-40db-8f2f-bd0e18587ddf-kube-api-access-4gblv\") pod \"rabbitmq-server-0\" (UID: \"9dea64b9-318c-40db-8f2f-bd0e18587ddf\") " pod="openstack/rabbitmq-server-0" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.616070 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c22daa68-7c34-4180-adcc-d939bfa5a607" (UID: "c22daa68-7c34-4180-adcc-d939bfa5a607"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.621278 4937 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.621337 4937 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c22daa68-7c34-4180-adcc-d939bfa5a607-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.621352 4937 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c22daa68-7c34-4180-adcc-d939bfa5a607-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 06:58:47 crc kubenswrapper[4937]: I0123 06:58:47.663037 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.159339 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.197507 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.207564 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.233490 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.260833 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.262631 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.266471 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.266557 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.266680 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.266769 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.266851 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.267606 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r4ghn" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.267626 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.300045 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336266 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336345 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336375 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvn7\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-kube-api-access-7mvn7\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336411 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336431 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336447 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336474 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336502 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17f47dee-5fcc-4198-ae4c-85b851ae5b20-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336530 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336750 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.336769 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17f47dee-5fcc-4198-ae4c-85b851ae5b20-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438396 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438452 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17f47dee-5fcc-4198-ae4c-85b851ae5b20-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438490 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438560 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438608 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvn7\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-kube-api-access-7mvn7\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438651 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438680 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438703 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438740 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438778 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17f47dee-5fcc-4198-ae4c-85b851ae5b20-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.438815 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 
06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.439317 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.439925 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.440142 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.440232 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.441038 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/17f47dee-5fcc-4198-ae4c-85b851ae5b20-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.441780 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.443644 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.444044 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.454197 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/17f47dee-5fcc-4198-ae4c-85b851ae5b20-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.458852 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/17f47dee-5fcc-4198-ae4c-85b851ae5b20-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.469258 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvn7\" (UniqueName: \"kubernetes.io/projected/17f47dee-5fcc-4198-ae4c-85b851ae5b20-kube-api-access-7mvn7\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.505315 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"17f47dee-5fcc-4198-ae4c-85b851ae5b20\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.537863 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6" path="/var/lib/kubelet/pods/c08dcfd4-6838-45a1-bfc5-2cc5dcb079f6/volumes" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.538641 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22daa68-7c34-4180-adcc-d939bfa5a607" path="/var/lib/kubelet/pods/c22daa68-7c34-4180-adcc-d939bfa5a607/volumes" Jan 23 06:58:48 crc kubenswrapper[4937]: I0123 06:58:48.647810 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:58:49 crc kubenswrapper[4937]: I0123 06:58:49.161160 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 06:58:49 crc kubenswrapper[4937]: I0123 06:58:49.175028 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dea64b9-318c-40db-8f2f-bd0e18587ddf","Type":"ContainerStarted","Data":"a7fcf4bd6aada5ed8f5a6654d177b917684036a3b47894647e374222af9c6358"} Jan 23 06:58:50 crc kubenswrapper[4937]: I0123 06:58:50.187938 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dea64b9-318c-40db-8f2f-bd0e18587ddf","Type":"ContainerStarted","Data":"9d64d26f214f49cc69140119c30fa271846bd18080fc027229ec182b1a2cc4c1"} Jan 23 06:58:50 crc kubenswrapper[4937]: I0123 06:58:50.192301 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17f47dee-5fcc-4198-ae4c-85b851ae5b20","Type":"ContainerStarted","Data":"45b92b477e524ff6c975a39e6cd3d6c3c6f062682ce081f004decc5531906401"} Jan 23 06:58:51 crc kubenswrapper[4937]: I0123 06:58:51.202750 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17f47dee-5fcc-4198-ae4c-85b851ae5b20","Type":"ContainerStarted","Data":"c9fb93f36ded164b26e909c6d5658ee2f4bb5d12d96f320bad1b280cf80ae955"} Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.489520 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6df8b6c465-qv2w5"] Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.492309 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.497578 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.512483 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6df8b6c465-qv2w5"] Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.562484 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-nb\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.562536 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-sb\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.562709 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tg2m\" (UniqueName: \"kubernetes.io/projected/9897a2c6-f645-4c82-9c7b-f9622994911b-kube-api-access-4tg2m\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.562931 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-swift-storage-0\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: 
\"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.562978 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-config\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.563054 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.563124 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-svc\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665553 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-config\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665624 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: 
\"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665670 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-svc\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665784 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-nb\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665815 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-sb\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665860 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tg2m\" (UniqueName: \"kubernetes.io/projected/9897a2c6-f645-4c82-9c7b-f9622994911b-kube-api-access-4tg2m\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.665925 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-swift-storage-0\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " 
pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.666496 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-openstack-edpm-ipam\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.666688 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-svc\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.666818 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-nb\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.666830 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-config\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.666984 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-swift-storage-0\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 
06:58:54.666988 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-sb\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.682894 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tg2m\" (UniqueName: \"kubernetes.io/projected/9897a2c6-f645-4c82-9c7b-f9622994911b-kube-api-access-4tg2m\") pod \"dnsmasq-dns-6df8b6c465-qv2w5\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:54 crc kubenswrapper[4937]: I0123 06:58:54.818541 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:55 crc kubenswrapper[4937]: I0123 06:58:55.417917 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6df8b6c465-qv2w5"] Jan 23 06:58:56 crc kubenswrapper[4937]: I0123 06:58:56.248733 4937 generic.go:334] "Generic (PLEG): container finished" podID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerID="b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225" exitCode=0 Jan 23 06:58:56 crc kubenswrapper[4937]: I0123 06:58:56.248872 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" event={"ID":"9897a2c6-f645-4c82-9c7b-f9622994911b","Type":"ContainerDied","Data":"b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225"} Jan 23 06:58:56 crc kubenswrapper[4937]: I0123 06:58:56.249018 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" event={"ID":"9897a2c6-f645-4c82-9c7b-f9622994911b","Type":"ContainerStarted","Data":"6eafd36d62db8d2c9dc1a4f0d830db57e2253555980a414b90c00e91c66eb6d9"} Jan 23 06:58:57 crc 
kubenswrapper[4937]: I0123 06:58:57.265995 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" event={"ID":"9897a2c6-f645-4c82-9c7b-f9622994911b","Type":"ContainerStarted","Data":"5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985"} Jan 23 06:58:57 crc kubenswrapper[4937]: I0123 06:58:57.266577 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:58:57 crc kubenswrapper[4937]: I0123 06:58:57.291462 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" podStartSLOduration=3.291441713 podStartE2EDuration="3.291441713s" podCreationTimestamp="2026-01-23 06:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:58:57.284120374 +0000 UTC m=+1537.087887037" watchObservedRunningTime="2026-01-23 06:58:57.291441713 +0000 UTC m=+1537.095208366" Jan 23 06:59:04 crc kubenswrapper[4937]: I0123 06:59:04.819884 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:59:04 crc kubenswrapper[4937]: I0123 06:59:04.837461 4937 scope.go:117] "RemoveContainer" containerID="1061c71e90e3c339192a8927fc1c04e92e12d63bd856ba64167558bd0307450c" Jan 23 06:59:04 crc kubenswrapper[4937]: I0123 06:59:04.886652 4937 scope.go:117] "RemoveContainer" containerID="c78dd2c4ac4ca5173544ae9c6389a8cb3fa42c87c1ef8b31a3fa7f3b2491ab8f" Jan 23 06:59:04 crc kubenswrapper[4937]: I0123 06:59:04.924817 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b68f87bf-ggngd"] Jan 23 06:59:04 crc kubenswrapper[4937]: I0123 06:59:04.925456 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" 
containerName="dnsmasq-dns" containerID="cri-o://b3a70a92824af4310b467ee519fdf58a87504d9400ca7494962bd9e3541b3d84" gracePeriod=10 Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.135430 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86f4b56d69-9jng8"] Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.137225 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.181181 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86f4b56d69-9jng8"] Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.302634 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-config\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.302795 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-dns-swift-storage-0\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.302993 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzv9\" (UniqueName: \"kubernetes.io/projected/fed385bc-7605-4d39-916e-7afd43801da8-kube-api-access-mrzv9\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.303034 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.303064 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-ovsdbserver-nb\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.303096 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-ovsdbserver-sb\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.303133 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-dns-svc\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.357918 4937 generic.go:334] "Generic (PLEG): container finished" podID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerID="b3a70a92824af4310b467ee519fdf58a87504d9400ca7494962bd9e3541b3d84" exitCode=0 Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.357992 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" 
event={"ID":"242fc90c-f61c-4f85-935a-17e7f81e2026","Type":"ContainerDied","Data":"b3a70a92824af4310b467ee519fdf58a87504d9400ca7494962bd9e3541b3d84"} Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.405881 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-config\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406142 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-dns-swift-storage-0\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406327 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzv9\" (UniqueName: \"kubernetes.io/projected/fed385bc-7605-4d39-916e-7afd43801da8-kube-api-access-mrzv9\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406423 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406518 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-ovsdbserver-nb\") pod 
\"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406692 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-ovsdbserver-sb\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406849 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-dns-svc\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.407424 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-openstack-edpm-ipam\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.407695 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-dns-swift-storage-0\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.406788 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-config\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " 
pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.408217 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-dns-svc\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.408380 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-ovsdbserver-sb\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.408433 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fed385bc-7605-4d39-916e-7afd43801da8-ovsdbserver-nb\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.426171 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzv9\" (UniqueName: \"kubernetes.io/projected/fed385bc-7605-4d39-916e-7afd43801da8-kube-api-access-mrzv9\") pod \"dnsmasq-dns-86f4b56d69-9jng8\" (UID: \"fed385bc-7605-4d39-916e-7afd43801da8\") " pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.459960 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.512865 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.610526 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4jcr\" (UniqueName: \"kubernetes.io/projected/242fc90c-f61c-4f85-935a-17e7f81e2026-kube-api-access-k4jcr\") pod \"242fc90c-f61c-4f85-935a-17e7f81e2026\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.610616 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-sb\") pod \"242fc90c-f61c-4f85-935a-17e7f81e2026\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.610654 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-nb\") pod \"242fc90c-f61c-4f85-935a-17e7f81e2026\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.610745 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-swift-storage-0\") pod \"242fc90c-f61c-4f85-935a-17e7f81e2026\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.610853 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-svc\") pod \"242fc90c-f61c-4f85-935a-17e7f81e2026\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.610925 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-config\") pod \"242fc90c-f61c-4f85-935a-17e7f81e2026\" (UID: \"242fc90c-f61c-4f85-935a-17e7f81e2026\") " Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.615506 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242fc90c-f61c-4f85-935a-17e7f81e2026-kube-api-access-k4jcr" (OuterVolumeSpecName: "kube-api-access-k4jcr") pod "242fc90c-f61c-4f85-935a-17e7f81e2026" (UID: "242fc90c-f61c-4f85-935a-17e7f81e2026"). InnerVolumeSpecName "kube-api-access-k4jcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.657684 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-config" (OuterVolumeSpecName: "config") pod "242fc90c-f61c-4f85-935a-17e7f81e2026" (UID: "242fc90c-f61c-4f85-935a-17e7f81e2026"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.660248 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "242fc90c-f61c-4f85-935a-17e7f81e2026" (UID: "242fc90c-f61c-4f85-935a-17e7f81e2026"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.660319 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "242fc90c-f61c-4f85-935a-17e7f81e2026" (UID: "242fc90c-f61c-4f85-935a-17e7f81e2026"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.664051 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "242fc90c-f61c-4f85-935a-17e7f81e2026" (UID: "242fc90c-f61c-4f85-935a-17e7f81e2026"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.674207 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "242fc90c-f61c-4f85-935a-17e7f81e2026" (UID: "242fc90c-f61c-4f85-935a-17e7f81e2026"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.714781 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.715491 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.715516 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.715532 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:05 crc 
kubenswrapper[4937]: I0123 06:59:05.715546 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4jcr\" (UniqueName: \"kubernetes.io/projected/242fc90c-f61c-4f85-935a-17e7f81e2026-kube-api-access-k4jcr\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:05 crc kubenswrapper[4937]: I0123 06:59:05.715558 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/242fc90c-f61c-4f85-935a-17e7f81e2026-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:06 crc kubenswrapper[4937]: W0123 06:59:06.004323 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfed385bc_7605_4d39_916e_7afd43801da8.slice/crio-f5b2250742601146e0af2efc7e1d49972b3c3a50586d1c0b97eb14a77895b45c WatchSource:0}: Error finding container f5b2250742601146e0af2efc7e1d49972b3c3a50586d1c0b97eb14a77895b45c: Status 404 returned error can't find the container with id f5b2250742601146e0af2efc7e1d49972b3c3a50586d1c0b97eb14a77895b45c Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.035516 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86f4b56d69-9jng8"] Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.372449 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" event={"ID":"fed385bc-7605-4d39-916e-7afd43801da8","Type":"ContainerStarted","Data":"a8de05d4195f5b7b7ce65b05b80751a8758725f5e3588ae920707d1cf9f1d72f"} Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.372771 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" event={"ID":"fed385bc-7605-4d39-916e-7afd43801da8","Type":"ContainerStarted","Data":"f5b2250742601146e0af2efc7e1d49972b3c3a50586d1c0b97eb14a77895b45c"} Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.376437 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" event={"ID":"242fc90c-f61c-4f85-935a-17e7f81e2026","Type":"ContainerDied","Data":"c836640af9f91ff6225f95367c8a5733a4a4b72dfbbbee4d58232af6f46c1b91"} Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.376480 4937 scope.go:117] "RemoveContainer" containerID="b3a70a92824af4310b467ee519fdf58a87504d9400ca7494962bd9e3541b3d84" Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.376503 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68b68f87bf-ggngd" Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.479378 4937 scope.go:117] "RemoveContainer" containerID="978f10f3e93a87622b35537b7c10b347089019dd359654e799b836edb69645cc" Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.487645 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68b68f87bf-ggngd"] Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.506008 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68b68f87bf-ggngd"] Jan 23 06:59:06 crc kubenswrapper[4937]: I0123 06:59:06.554792 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" path="/var/lib/kubelet/pods/242fc90c-f61c-4f85-935a-17e7f81e2026/volumes" Jan 23 06:59:07 crc kubenswrapper[4937]: I0123 06:59:07.389220 4937 generic.go:334] "Generic (PLEG): container finished" podID="fed385bc-7605-4d39-916e-7afd43801da8" containerID="a8de05d4195f5b7b7ce65b05b80751a8758725f5e3588ae920707d1cf9f1d72f" exitCode=0 Jan 23 06:59:07 crc kubenswrapper[4937]: I0123 06:59:07.389331 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" event={"ID":"fed385bc-7605-4d39-916e-7afd43801da8","Type":"ContainerDied","Data":"a8de05d4195f5b7b7ce65b05b80751a8758725f5e3588ae920707d1cf9f1d72f"} Jan 23 06:59:08 crc kubenswrapper[4937]: I0123 06:59:08.412707 4937 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" event={"ID":"fed385bc-7605-4d39-916e-7afd43801da8","Type":"ContainerStarted","Data":"22618adf6c6941395822022efa0498f71b6d5f476d3f905735345283a3c9648a"} Jan 23 06:59:08 crc kubenswrapper[4937]: I0123 06:59:08.413407 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:08 crc kubenswrapper[4937]: I0123 06:59:08.458459 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" podStartSLOduration=3.458428111 podStartE2EDuration="3.458428111s" podCreationTimestamp="2026-01-23 06:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:59:08.445052687 +0000 UTC m=+1548.248819400" watchObservedRunningTime="2026-01-23 06:59:08.458428111 +0000 UTC m=+1548.262194794" Jan 23 06:59:15 crc kubenswrapper[4937]: I0123 06:59:15.462573 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86f4b56d69-9jng8" Jan 23 06:59:15 crc kubenswrapper[4937]: I0123 06:59:15.528042 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6df8b6c465-qv2w5"] Jan 23 06:59:15 crc kubenswrapper[4937]: I0123 06:59:15.528557 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerName="dnsmasq-dns" containerID="cri-o://5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985" gracePeriod=10 Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.053480 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.146993 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-config\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.147034 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-svc\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.147108 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-swift-storage-0\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.147133 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-openstack-edpm-ipam\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.147151 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-nb\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.147248 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-sb\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.147269 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tg2m\" (UniqueName: \"kubernetes.io/projected/9897a2c6-f645-4c82-9c7b-f9622994911b-kube-api-access-4tg2m\") pod \"9897a2c6-f645-4c82-9c7b-f9622994911b\" (UID: \"9897a2c6-f645-4c82-9c7b-f9622994911b\") " Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.194853 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9897a2c6-f645-4c82-9c7b-f9622994911b-kube-api-access-4tg2m" (OuterVolumeSpecName: "kube-api-access-4tg2m") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "kube-api-access-4tg2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.224719 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.235781 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.237522 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.239512 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-config" (OuterVolumeSpecName: "config") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.251647 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.251683 4937 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.251694 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tg2m\" (UniqueName: \"kubernetes.io/projected/9897a2c6-f645-4c82-9c7b-f9622994911b-kube-api-access-4tg2m\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.251711 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-config\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc 
kubenswrapper[4937]: I0123 06:59:16.251721 4937 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.256172 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.262669 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9897a2c6-f645-4c82-9c7b-f9622994911b" (UID: "9897a2c6-f645-4c82-9c7b-f9622994911b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.353846 4937 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.353890 4937 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9897a2c6-f645-4c82-9c7b-f9622994911b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.530144 4937 generic.go:334] "Generic (PLEG): container finished" podID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerID="5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985" exitCode=0 Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.530235 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.541182 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" event={"ID":"9897a2c6-f645-4c82-9c7b-f9622994911b","Type":"ContainerDied","Data":"5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985"} Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.541229 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6df8b6c465-qv2w5" event={"ID":"9897a2c6-f645-4c82-9c7b-f9622994911b","Type":"ContainerDied","Data":"6eafd36d62db8d2c9dc1a4f0d830db57e2253555980a414b90c00e91c66eb6d9"} Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.541247 4937 scope.go:117] "RemoveContainer" containerID="5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.578693 4937 scope.go:117] "RemoveContainer" 
containerID="b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.586931 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6df8b6c465-qv2w5"] Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.596108 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6df8b6c465-qv2w5"] Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.609070 4937 scope.go:117] "RemoveContainer" containerID="5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985" Jan 23 06:59:16 crc kubenswrapper[4937]: E0123 06:59:16.609562 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985\": container with ID starting with 5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985 not found: ID does not exist" containerID="5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.609623 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985"} err="failed to get container status \"5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985\": rpc error: code = NotFound desc = could not find container \"5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985\": container with ID starting with 5351f6606d5ace0da7751103ac0ed279f2a96494421c127d6f67173e565ac985 not found: ID does not exist" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.609666 4937 scope.go:117] "RemoveContainer" containerID="b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225" Jan 23 06:59:16 crc kubenswrapper[4937]: E0123 06:59:16.610032 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225\": container with ID starting with b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225 not found: ID does not exist" containerID="b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225" Jan 23 06:59:16 crc kubenswrapper[4937]: I0123 06:59:16.610058 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225"} err="failed to get container status \"b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225\": rpc error: code = NotFound desc = could not find container \"b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225\": container with ID starting with b2ac9251cb4689e00ea4571a80bbf4605ccdc96eb2673ad455d8ae43b8b34225 not found: ID does not exist" Jan 23 06:59:18 crc kubenswrapper[4937]: I0123 06:59:18.974575 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" path="/var/lib/kubelet/pods/9897a2c6-f645-4c82-9c7b-f9622994911b/volumes" Jan 23 06:59:22 crc kubenswrapper[4937]: I0123 06:59:22.000181 4937 generic.go:334] "Generic (PLEG): container finished" podID="9dea64b9-318c-40db-8f2f-bd0e18587ddf" containerID="9d64d26f214f49cc69140119c30fa271846bd18080fc027229ec182b1a2cc4c1" exitCode=0 Jan 23 06:59:22 crc kubenswrapper[4937]: I0123 06:59:22.000269 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dea64b9-318c-40db-8f2f-bd0e18587ddf","Type":"ContainerDied","Data":"9d64d26f214f49cc69140119c30fa271846bd18080fc027229ec182b1a2cc4c1"} Jan 23 06:59:23 crc kubenswrapper[4937]: I0123 06:59:23.012915 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9dea64b9-318c-40db-8f2f-bd0e18587ddf","Type":"ContainerStarted","Data":"0423e0ef16822f20e9da1e334836ee7224d6bf8c0ae33a30d1235cdc1d1dd2f8"} Jan 
23 06:59:23 crc kubenswrapper[4937]: I0123 06:59:23.013502 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 06:59:23 crc kubenswrapper[4937]: I0123 06:59:23.041814 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.041792148 podStartE2EDuration="36.041792148s" podCreationTimestamp="2026-01-23 06:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:59:23.032304209 +0000 UTC m=+1562.836070882" watchObservedRunningTime="2026-01-23 06:59:23.041792148 +0000 UTC m=+1562.845558801" Jan 23 06:59:24 crc kubenswrapper[4937]: I0123 06:59:24.023108 4937 generic.go:334] "Generic (PLEG): container finished" podID="17f47dee-5fcc-4198-ae4c-85b851ae5b20" containerID="c9fb93f36ded164b26e909c6d5658ee2f4bb5d12d96f320bad1b280cf80ae955" exitCode=0 Jan 23 06:59:24 crc kubenswrapper[4937]: I0123 06:59:24.024376 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17f47dee-5fcc-4198-ae4c-85b851ae5b20","Type":"ContainerDied","Data":"c9fb93f36ded164b26e909c6d5658ee2f4bb5d12d96f320bad1b280cf80ae955"} Jan 23 06:59:25 crc kubenswrapper[4937]: I0123 06:59:25.037865 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"17f47dee-5fcc-4198-ae4c-85b851ae5b20","Type":"ContainerStarted","Data":"2ca8bee419552d408fe8211f40f2d7135e5ff1f7eec1d5e2385fb76d5d459970"} Jan 23 06:59:25 crc kubenswrapper[4937]: I0123 06:59:25.038721 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:59:25 crc kubenswrapper[4937]: I0123 06:59:25.066845 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.066825379 
podStartE2EDuration="37.066825379s" podCreationTimestamp="2026-01-23 06:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 06:59:25.060412935 +0000 UTC m=+1564.864179588" watchObservedRunningTime="2026-01-23 06:59:25.066825379 +0000 UTC m=+1564.870592032" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.983866 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz"] Jan 23 06:59:28 crc kubenswrapper[4937]: E0123 06:59:28.984921 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerName="init" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.984937 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerName="init" Jan 23 06:59:28 crc kubenswrapper[4937]: E0123 06:59:28.984964 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerName="dnsmasq-dns" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.984974 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerName="dnsmasq-dns" Jan 23 06:59:28 crc kubenswrapper[4937]: E0123 06:59:28.985013 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerName="dnsmasq-dns" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.985022 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerName="dnsmasq-dns" Jan 23 06:59:28 crc kubenswrapper[4937]: E0123 06:59:28.985049 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerName="init" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.985058 4937 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerName="init" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.985313 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="242fc90c-f61c-4f85-935a-17e7f81e2026" containerName="dnsmasq-dns" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.985339 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="9897a2c6-f645-4c82-9c7b-f9622994911b" containerName="dnsmasq-dns" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.986185 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.989136 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.989262 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.989478 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 06:59:28 crc kubenswrapper[4937]: I0123 06:59:28.990019 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.002458 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz"] Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.108631 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55nr\" (UniqueName: \"kubernetes.io/projected/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-kube-api-access-c55nr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.108741 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.108780 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.108838 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.210453 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c55nr\" (UniqueName: \"kubernetes.io/projected/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-kube-api-access-c55nr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.210585 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.210687 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.210754 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.217824 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.218709 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.225036 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.231249 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c55nr\" (UniqueName: \"kubernetes.io/projected/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-kube-api-access-c55nr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.305685 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" Jan 23 06:59:29 crc kubenswrapper[4937]: I0123 06:59:29.869170 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz"] Jan 23 06:59:30 crc kubenswrapper[4937]: I0123 06:59:30.105457 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" event={"ID":"62611df4-293b-4aea-8a00-cbcfc2ffdfaf","Type":"ContainerStarted","Data":"e22b6f89fa5027c4300a7f06791eeafe29423d1a4ef5f7bd4bb5368d95478ce7"} Jan 23 06:59:37 crc kubenswrapper[4937]: I0123 06:59:37.668859 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.178778 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5ch6l"] Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.181222 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.197346 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ch6l"] Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.295950 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-utilities\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.296448 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllz2\" (UniqueName: \"kubernetes.io/projected/471611d1-57b0-4b13-9f95-5c3505ce0aff-kube-api-access-mllz2\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.296623 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-catalog-content\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.399172 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-utilities\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.399313 4937 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mllz2\" (UniqueName: \"kubernetes.io/projected/471611d1-57b0-4b13-9f95-5c3505ce0aff-kube-api-access-mllz2\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.399368 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-catalog-content\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.399739 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-utilities\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.399788 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-catalog-content\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.426208 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mllz2\" (UniqueName: \"kubernetes.io/projected/471611d1-57b0-4b13-9f95-5c3505ce0aff-kube-api-access-mllz2\") pod \"redhat-marketplace-5ch6l\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") " pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.510936 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ch6l" Jan 23 06:59:38 crc kubenswrapper[4937]: I0123 06:59:38.650780 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 06:59:40 crc kubenswrapper[4937]: I0123 06:59:40.264071 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ch6l"] Jan 23 06:59:40 crc kubenswrapper[4937]: I0123 06:59:40.269280 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" event={"ID":"62611df4-293b-4aea-8a00-cbcfc2ffdfaf","Type":"ContainerStarted","Data":"a9d135c7c71cd0c1c81386b4888c5082691e168a6d159e0e41d1d75880319a84"} Jan 23 06:59:40 crc kubenswrapper[4937]: I0123 06:59:40.297064 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" podStartSLOduration=2.346471307 podStartE2EDuration="12.297041259s" podCreationTimestamp="2026-01-23 06:59:28 +0000 UTC" firstStartedPulling="2026-01-23 06:59:29.879340742 +0000 UTC m=+1569.683107395" lastFinishedPulling="2026-01-23 06:59:39.829910694 +0000 UTC m=+1579.633677347" observedRunningTime="2026-01-23 06:59:40.284576979 +0000 UTC m=+1580.088343622" watchObservedRunningTime="2026-01-23 06:59:40.297041259 +0000 UTC m=+1580.100807912" Jan 23 06:59:41 crc kubenswrapper[4937]: I0123 06:59:41.289265 4937 generic.go:334] "Generic (PLEG): container finished" podID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerID="b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af" exitCode=0 Jan 23 06:59:41 crc kubenswrapper[4937]: I0123 06:59:41.289352 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ch6l" event={"ID":"471611d1-57b0-4b13-9f95-5c3505ce0aff","Type":"ContainerDied","Data":"b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af"} Jan 23 
06:59:41 crc kubenswrapper[4937]: I0123 06:59:41.289617 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ch6l" event={"ID":"471611d1-57b0-4b13-9f95-5c3505ce0aff","Type":"ContainerStarted","Data":"07ae395ee9b2b5f65e1e34c42cc99c9c44a37c0fcbea284c4cb8a8413d881b25"}
Jan 23 06:59:42 crc kubenswrapper[4937]: I0123 06:59:42.304766 4937 generic.go:334] "Generic (PLEG): container finished" podID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerID="749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b" exitCode=0
Jan 23 06:59:42 crc kubenswrapper[4937]: I0123 06:59:42.304849 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ch6l" event={"ID":"471611d1-57b0-4b13-9f95-5c3505ce0aff","Type":"ContainerDied","Data":"749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b"}
Jan 23 06:59:44 crc kubenswrapper[4937]: I0123 06:59:44.324308 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ch6l" event={"ID":"471611d1-57b0-4b13-9f95-5c3505ce0aff","Type":"ContainerStarted","Data":"223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa"}
Jan 23 06:59:44 crc kubenswrapper[4937]: I0123 06:59:44.354431 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5ch6l" podStartSLOduration=4.594658216 podStartE2EDuration="6.354407231s" podCreationTimestamp="2026-01-23 06:59:38 +0000 UTC" firstStartedPulling="2026-01-23 06:59:41.292027772 +0000 UTC m=+1581.095794425" lastFinishedPulling="2026-01-23 06:59:43.051776787 +0000 UTC m=+1582.855543440" observedRunningTime="2026-01-23 06:59:44.343078162 +0000 UTC m=+1584.146844845" watchObservedRunningTime="2026-01-23 06:59:44.354407231 +0000 UTC m=+1584.158173904"
Jan 23 06:59:48 crc kubenswrapper[4937]: I0123 06:59:48.511737 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5ch6l"
Jan 23 06:59:48 crc kubenswrapper[4937]: I0123 06:59:48.514879 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5ch6l"
Jan 23 06:59:48 crc kubenswrapper[4937]: I0123 06:59:48.571003 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5ch6l"
Jan 23 06:59:49 crc kubenswrapper[4937]: I0123 06:59:49.430826 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5ch6l"
Jan 23 06:59:49 crc kubenswrapper[4937]: I0123 06:59:49.490702 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ch6l"]
Jan 23 06:59:51 crc kubenswrapper[4937]: I0123 06:59:51.397318 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5ch6l" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="registry-server" containerID="cri-o://223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa" gracePeriod=2
Jan 23 06:59:51 crc kubenswrapper[4937]: I0123 06:59:51.883248 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ch6l"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.047348 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mllz2\" (UniqueName: \"kubernetes.io/projected/471611d1-57b0-4b13-9f95-5c3505ce0aff-kube-api-access-mllz2\") pod \"471611d1-57b0-4b13-9f95-5c3505ce0aff\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") "
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.047525 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-utilities\") pod \"471611d1-57b0-4b13-9f95-5c3505ce0aff\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") "
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.047723 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-catalog-content\") pod \"471611d1-57b0-4b13-9f95-5c3505ce0aff\" (UID: \"471611d1-57b0-4b13-9f95-5c3505ce0aff\") "
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.049241 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-utilities" (OuterVolumeSpecName: "utilities") pod "471611d1-57b0-4b13-9f95-5c3505ce0aff" (UID: "471611d1-57b0-4b13-9f95-5c3505ce0aff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.057858 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/471611d1-57b0-4b13-9f95-5c3505ce0aff-kube-api-access-mllz2" (OuterVolumeSpecName: "kube-api-access-mllz2") pod "471611d1-57b0-4b13-9f95-5c3505ce0aff" (UID: "471611d1-57b0-4b13-9f95-5c3505ce0aff"). InnerVolumeSpecName "kube-api-access-mllz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.100023 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "471611d1-57b0-4b13-9f95-5c3505ce0aff" (UID: "471611d1-57b0-4b13-9f95-5c3505ce0aff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.151823 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.151861 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mllz2\" (UniqueName: \"kubernetes.io/projected/471611d1-57b0-4b13-9f95-5c3505ce0aff-kube-api-access-mllz2\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.151876 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/471611d1-57b0-4b13-9f95-5c3505ce0aff-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.414944 4937 generic.go:334] "Generic (PLEG): container finished" podID="62611df4-293b-4aea-8a00-cbcfc2ffdfaf" containerID="a9d135c7c71cd0c1c81386b4888c5082691e168a6d159e0e41d1d75880319a84" exitCode=0
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.415088 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" event={"ID":"62611df4-293b-4aea-8a00-cbcfc2ffdfaf","Type":"ContainerDied","Data":"a9d135c7c71cd0c1c81386b4888c5082691e168a6d159e0e41d1d75880319a84"}
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.421877 4937 generic.go:334] "Generic (PLEG): container finished" podID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerID="223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa" exitCode=0
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.422023 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ch6l"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.422033 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ch6l" event={"ID":"471611d1-57b0-4b13-9f95-5c3505ce0aff","Type":"ContainerDied","Data":"223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa"}
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.422189 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ch6l" event={"ID":"471611d1-57b0-4b13-9f95-5c3505ce0aff","Type":"ContainerDied","Data":"07ae395ee9b2b5f65e1e34c42cc99c9c44a37c0fcbea284c4cb8a8413d881b25"}
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.422226 4937 scope.go:117] "RemoveContainer" containerID="223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.474709 4937 scope.go:117] "RemoveContainer" containerID="749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.488139 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ch6l"]
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.497447 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ch6l"]
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.502976 4937 scope.go:117] "RemoveContainer" containerID="b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.544787 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" path="/var/lib/kubelet/pods/471611d1-57b0-4b13-9f95-5c3505ce0aff/volumes"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.551834 4937 scope.go:117] "RemoveContainer" containerID="223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa"
Jan 23 06:59:52 crc kubenswrapper[4937]: E0123 06:59:52.553024 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa\": container with ID starting with 223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa not found: ID does not exist" containerID="223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.553052 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa"} err="failed to get container status \"223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa\": rpc error: code = NotFound desc = could not find container \"223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa\": container with ID starting with 223cdcda50687c204e2bf94ff48cf226cd99abdb89943f1d41af9a9db4c2d8fa not found: ID does not exist"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.553074 4937 scope.go:117] "RemoveContainer" containerID="749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b"
Jan 23 06:59:52 crc kubenswrapper[4937]: E0123 06:59:52.553402 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b\": container with ID starting with 749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b not found: ID does not exist" containerID="749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.553425 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b"} err="failed to get container status \"749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b\": rpc error: code = NotFound desc = could not find container \"749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b\": container with ID starting with 749387be0cee0e1953a388f71e5541d17de4c0f0650617775ca40a02ca05962b not found: ID does not exist"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.553522 4937 scope.go:117] "RemoveContainer" containerID="b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af"
Jan 23 06:59:52 crc kubenswrapper[4937]: E0123 06:59:52.554151 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af\": container with ID starting with b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af not found: ID does not exist" containerID="b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af"
Jan 23 06:59:52 crc kubenswrapper[4937]: I0123 06:59:52.554231 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af"} err="failed to get container status \"b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af\": rpc error: code = NotFound desc = could not find container \"b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af\": container with ID starting with b93c7dcdd9ebf8e3cabd549323fa2c764217d30d7e79437fcb8ea1bd6bc993af not found: ID does not exist"
Jan 23 06:59:53 crc kubenswrapper[4937]: I0123 06:59:53.914808 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.092723 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c55nr\" (UniqueName: \"kubernetes.io/projected/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-kube-api-access-c55nr\") pod \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") "
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.092930 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-repo-setup-combined-ca-bundle\") pod \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") "
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.092997 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-inventory\") pod \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") "
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.093117 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-ssh-key-openstack-edpm-ipam\") pod \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\" (UID: \"62611df4-293b-4aea-8a00-cbcfc2ffdfaf\") "
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.097996 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-kube-api-access-c55nr" (OuterVolumeSpecName: "kube-api-access-c55nr") pod "62611df4-293b-4aea-8a00-cbcfc2ffdfaf" (UID: "62611df4-293b-4aea-8a00-cbcfc2ffdfaf"). InnerVolumeSpecName "kube-api-access-c55nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.098006 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "62611df4-293b-4aea-8a00-cbcfc2ffdfaf" (UID: "62611df4-293b-4aea-8a00-cbcfc2ffdfaf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.121642 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62611df4-293b-4aea-8a00-cbcfc2ffdfaf" (UID: "62611df4-293b-4aea-8a00-cbcfc2ffdfaf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.121967 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-inventory" (OuterVolumeSpecName: "inventory") pod "62611df4-293b-4aea-8a00-cbcfc2ffdfaf" (UID: "62611df4-293b-4aea-8a00-cbcfc2ffdfaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.195889 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c55nr\" (UniqueName: \"kubernetes.io/projected/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-kube-api-access-c55nr\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.195918 4937 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.195932 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.195948 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62611df4-293b-4aea-8a00-cbcfc2ffdfaf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.446742 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz" event={"ID":"62611df4-293b-4aea-8a00-cbcfc2ffdfaf","Type":"ContainerDied","Data":"e22b6f89fa5027c4300a7f06791eeafe29423d1a4ef5f7bd4bb5368d95478ce7"}
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.446783 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22b6f89fa5027c4300a7f06791eeafe29423d1a4ef5f7bd4bb5368d95478ce7"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.446848 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.575180 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"]
Jan 23 06:59:54 crc kubenswrapper[4937]: E0123 06:59:54.576789 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="extract-content"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.576997 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="extract-content"
Jan 23 06:59:54 crc kubenswrapper[4937]: E0123 06:59:54.577423 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="registry-server"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.577557 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="registry-server"
Jan 23 06:59:54 crc kubenswrapper[4937]: E0123 06:59:54.578188 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="extract-utilities"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.578322 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="extract-utilities"
Jan 23 06:59:54 crc kubenswrapper[4937]: E0123 06:59:54.578436 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62611df4-293b-4aea-8a00-cbcfc2ffdfaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.578564 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="62611df4-293b-4aea-8a00-cbcfc2ffdfaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.579475 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="62611df4-293b-4aea-8a00-cbcfc2ffdfaf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.581716 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="471611d1-57b0-4b13-9f95-5c3505ce0aff" containerName="registry-server"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.583154 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.585922 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.586108 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.586583 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.591512 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.592185 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"]
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.730310 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54kz7\" (UniqueName: \"kubernetes.io/projected/f2dda152-b150-489b-a647-01f4a252be0a-kube-api-access-54kz7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.730451 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.730473 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.832603 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54kz7\" (UniqueName: \"kubernetes.io/projected/f2dda152-b150-489b-a647-01f4a252be0a-kube-api-access-54kz7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.832765 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.832803 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.839298 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.842361 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.850183 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54kz7\" (UniqueName: \"kubernetes.io/projected/f2dda152-b150-489b-a647-01f4a252be0a-kube-api-access-54kz7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-mjvdx\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:54 crc kubenswrapper[4937]: I0123 06:59:54.898769 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 06:59:55 crc kubenswrapper[4937]: I0123 06:59:55.457476 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"]
Jan 23 06:59:55 crc kubenswrapper[4937]: I0123 06:59:55.467309 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 06:59:56 crc kubenswrapper[4937]: I0123 06:59:56.468944 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx" event={"ID":"f2dda152-b150-489b-a647-01f4a252be0a","Type":"ContainerStarted","Data":"670af50d47e8fec37076f774c7ba29e371703920fb4e1c23e21e4cc13183e94c"}
Jan 23 06:59:57 crc kubenswrapper[4937]: I0123 06:59:57.482726 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx" event={"ID":"f2dda152-b150-489b-a647-01f4a252be0a","Type":"ContainerStarted","Data":"abd7a33e21d5532162f09fdc744edb4fa310c1b63bdcc2d73b05fb033e540a6d"}
Jan 23 06:59:57 crc kubenswrapper[4937]: I0123 06:59:57.513660 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx" podStartSLOduration=2.140950333 podStartE2EDuration="3.513636225s" podCreationTimestamp="2026-01-23 06:59:54 +0000 UTC" firstStartedPulling="2026-01-23 06:59:55.467134509 +0000 UTC m=+1595.270901152" lastFinishedPulling="2026-01-23 06:59:56.839820391 +0000 UTC m=+1596.643587044" observedRunningTime="2026-01-23 06:59:57.498192055 +0000 UTC m=+1597.301958718" watchObservedRunningTime="2026-01-23 06:59:57.513636225 +0000 UTC m=+1597.317402888"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.152226 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"]
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.154079 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.164336 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"]
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.189821 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6897fe65-100e-44a9-a48f-ddd8d84dc839-secret-volume\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.190116 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.190225 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.190726 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6897fe65-100e-44a9-a48f-ddd8d84dc839-config-volume\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.190819 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8bl2\" (UniqueName: \"kubernetes.io/projected/6897fe65-100e-44a9-a48f-ddd8d84dc839-kube-api-access-h8bl2\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.292653 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6897fe65-100e-44a9-a48f-ddd8d84dc839-config-volume\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.292740 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8bl2\" (UniqueName: \"kubernetes.io/projected/6897fe65-100e-44a9-a48f-ddd8d84dc839-kube-api-access-h8bl2\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.292867 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6897fe65-100e-44a9-a48f-ddd8d84dc839-secret-volume\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.293563 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6897fe65-100e-44a9-a48f-ddd8d84dc839-config-volume\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.299266 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6897fe65-100e-44a9-a48f-ddd8d84dc839-secret-volume\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.308513 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8bl2\" (UniqueName: \"kubernetes.io/projected/6897fe65-100e-44a9-a48f-ddd8d84dc839-kube-api-access-h8bl2\") pod \"collect-profiles-29485860-655fs\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.515968 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.528380 4937 generic.go:334] "Generic (PLEG): container finished" podID="f2dda152-b150-489b-a647-01f4a252be0a" containerID="abd7a33e21d5532162f09fdc744edb4fa310c1b63bdcc2d73b05fb033e540a6d" exitCode=0
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.546261 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx" event={"ID":"f2dda152-b150-489b-a647-01f4a252be0a","Type":"ContainerDied","Data":"abd7a33e21d5532162f09fdc744edb4fa310c1b63bdcc2d73b05fb033e540a6d"}
Jan 23 07:00:00 crc kubenswrapper[4937]: I0123 07:00:00.988535 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"]
Jan 23 07:00:00 crc kubenswrapper[4937]: W0123 07:00:00.992676 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6897fe65_100e_44a9_a48f_ddd8d84dc839.slice/crio-45cee2d6f55887e362412da41ec500e008d0ffa96b92db349a49c4b45f08af24 WatchSource:0}: Error finding container 45cee2d6f55887e362412da41ec500e008d0ffa96b92db349a49c4b45f08af24: Status 404 returned error can't find the container with id 45cee2d6f55887e362412da41ec500e008d0ffa96b92db349a49c4b45f08af24
Jan 23 07:00:01 crc kubenswrapper[4937]: I0123 07:00:01.539236 4937 generic.go:334] "Generic (PLEG): container finished" podID="6897fe65-100e-44a9-a48f-ddd8d84dc839" containerID="f8a15b5c27c437ca1ed9c98c65d0ee46e1e5ac7bcdee24cafc5b1b98a4c5c99e" exitCode=0
Jan 23 07:00:01 crc kubenswrapper[4937]: I0123 07:00:01.539298 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs" event={"ID":"6897fe65-100e-44a9-a48f-ddd8d84dc839","Type":"ContainerDied","Data":"f8a15b5c27c437ca1ed9c98c65d0ee46e1e5ac7bcdee24cafc5b1b98a4c5c99e"}
Jan 23 07:00:01 crc kubenswrapper[4937]: I0123 07:00:01.539754 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs" event={"ID":"6897fe65-100e-44a9-a48f-ddd8d84dc839","Type":"ContainerStarted","Data":"45cee2d6f55887e362412da41ec500e008d0ffa96b92db349a49c4b45f08af24"}
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.036308 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.126039 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-inventory\") pod \"f2dda152-b150-489b-a647-01f4a252be0a\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") "
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.126258 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54kz7\" (UniqueName: \"kubernetes.io/projected/f2dda152-b150-489b-a647-01f4a252be0a-kube-api-access-54kz7\") pod \"f2dda152-b150-489b-a647-01f4a252be0a\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") "
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.126298 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-ssh-key-openstack-edpm-ipam\") pod \"f2dda152-b150-489b-a647-01f4a252be0a\" (UID: \"f2dda152-b150-489b-a647-01f4a252be0a\") "
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.141401 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2dda152-b150-489b-a647-01f4a252be0a-kube-api-access-54kz7" (OuterVolumeSpecName: "kube-api-access-54kz7") pod "f2dda152-b150-489b-a647-01f4a252be0a" (UID: "f2dda152-b150-489b-a647-01f4a252be0a"). InnerVolumeSpecName "kube-api-access-54kz7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.157935 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f2dda152-b150-489b-a647-01f4a252be0a" (UID: "f2dda152-b150-489b-a647-01f4a252be0a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.163938 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-inventory" (OuterVolumeSpecName: "inventory") pod "f2dda152-b150-489b-a647-01f4a252be0a" (UID: "f2dda152-b150-489b-a647-01f4a252be0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.228858 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54kz7\" (UniqueName: \"kubernetes.io/projected/f2dda152-b150-489b-a647-01f4a252be0a-kube-api-access-54kz7\") on node \"crc\" DevicePath \"\""
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.229123 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.229133 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2dda152-b150-489b-a647-01f4a252be0a-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.568825 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx"
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.569753 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-mjvdx" event={"ID":"f2dda152-b150-489b-a647-01f4a252be0a","Type":"ContainerDied","Data":"670af50d47e8fec37076f774c7ba29e371703920fb4e1c23e21e4cc13183e94c"}
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.569801 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="670af50d47e8fec37076f774c7ba29e371703920fb4e1c23e21e4cc13183e94c"
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.655258 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz"]
Jan 23 07:00:02 crc kubenswrapper[4937]: E0123 07:00:02.655860 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2dda152-b150-489b-a647-01f4a252be0a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.655890 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2dda152-b150-489b-a647-01f4a252be0a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.656161 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2dda152-b150-489b-a647-01f4a252be0a" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.657337 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.660291 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.660389 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.660733 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.660792 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.661630 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.661774 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7n79\" (UniqueName: \"kubernetes.io/projected/00506369-00e0-43c2-be00-23b30a785c87-kube-api-access-s7n79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.661860 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.661951 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.667657 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz"] Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.763419 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.763614 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7n79\" (UniqueName: \"kubernetes.io/projected/00506369-00e0-43c2-be00-23b30a785c87-kube-api-access-s7n79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.763691 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.763847 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.768039 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.775216 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.777191 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.780627 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7n79\" (UniqueName: \"kubernetes.io/projected/00506369-00e0-43c2-be00-23b30a785c87-kube-api-access-s7n79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.914144 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs" Jan 23 07:00:02 crc kubenswrapper[4937]: I0123 07:00:02.979311 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.068902 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6897fe65-100e-44a9-a48f-ddd8d84dc839-secret-volume\") pod \"6897fe65-100e-44a9-a48f-ddd8d84dc839\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.069221 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8bl2\" (UniqueName: \"kubernetes.io/projected/6897fe65-100e-44a9-a48f-ddd8d84dc839-kube-api-access-h8bl2\") pod \"6897fe65-100e-44a9-a48f-ddd8d84dc839\" (UID: \"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.069362 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6897fe65-100e-44a9-a48f-ddd8d84dc839-config-volume\") pod \"6897fe65-100e-44a9-a48f-ddd8d84dc839\" (UID: 
\"6897fe65-100e-44a9-a48f-ddd8d84dc839\") " Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.071918 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6897fe65-100e-44a9-a48f-ddd8d84dc839-config-volume" (OuterVolumeSpecName: "config-volume") pod "6897fe65-100e-44a9-a48f-ddd8d84dc839" (UID: "6897fe65-100e-44a9-a48f-ddd8d84dc839"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.074948 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6897fe65-100e-44a9-a48f-ddd8d84dc839-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6897fe65-100e-44a9-a48f-ddd8d84dc839" (UID: "6897fe65-100e-44a9-a48f-ddd8d84dc839"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.075079 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6897fe65-100e-44a9-a48f-ddd8d84dc839-kube-api-access-h8bl2" (OuterVolumeSpecName: "kube-api-access-h8bl2") pod "6897fe65-100e-44a9-a48f-ddd8d84dc839" (UID: "6897fe65-100e-44a9-a48f-ddd8d84dc839"). InnerVolumeSpecName "kube-api-access-h8bl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.174144 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6897fe65-100e-44a9-a48f-ddd8d84dc839-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.174365 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8bl2\" (UniqueName: \"kubernetes.io/projected/6897fe65-100e-44a9-a48f-ddd8d84dc839-kube-api-access-h8bl2\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.174378 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6897fe65-100e-44a9-a48f-ddd8d84dc839-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.495895 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz"] Jan 23 07:00:03 crc kubenswrapper[4937]: W0123 07:00:03.499318 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00506369_00e0_43c2_be00_23b30a785c87.slice/crio-bc7f6988617492628fef8634e6cf2f6e4da5a59ad0067000bff7ea6361d90197 WatchSource:0}: Error finding container bc7f6988617492628fef8634e6cf2f6e4da5a59ad0067000bff7ea6361d90197: Status 404 returned error can't find the container with id bc7f6988617492628fef8634e6cf2f6e4da5a59ad0067000bff7ea6361d90197 Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.577711 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" event={"ID":"00506369-00e0-43c2-be00-23b30a785c87","Type":"ContainerStarted","Data":"bc7f6988617492628fef8634e6cf2f6e4da5a59ad0067000bff7ea6361d90197"} Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.578972 
4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs" event={"ID":"6897fe65-100e-44a9-a48f-ddd8d84dc839","Type":"ContainerDied","Data":"45cee2d6f55887e362412da41ec500e008d0ffa96b92db349a49c4b45f08af24"} Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.578996 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45cee2d6f55887e362412da41ec500e008d0ffa96b92db349a49c4b45f08af24" Jan 23 07:00:03 crc kubenswrapper[4937]: I0123 07:00:03.579011 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs" Jan 23 07:00:04 crc kubenswrapper[4937]: I0123 07:00:04.588725 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" event={"ID":"00506369-00e0-43c2-be00-23b30a785c87","Type":"ContainerStarted","Data":"0192fb0ba05bd370cb59fb54f3f89f4ff4210a9be2165a113a947247f02db6ac"} Jan 23 07:00:04 crc kubenswrapper[4937]: I0123 07:00:04.615049 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" podStartSLOduration=2.066751411 podStartE2EDuration="2.615032497s" podCreationTimestamp="2026-01-23 07:00:02 +0000 UTC" firstStartedPulling="2026-01-23 07:00:03.5016869 +0000 UTC m=+1603.305453553" lastFinishedPulling="2026-01-23 07:00:04.049967986 +0000 UTC m=+1603.853734639" observedRunningTime="2026-01-23 07:00:04.609833436 +0000 UTC m=+1604.413600139" watchObservedRunningTime="2026-01-23 07:00:04.615032497 +0000 UTC m=+1604.418799140" Jan 23 07:00:05 crc kubenswrapper[4937]: I0123 07:00:05.152970 4937 scope.go:117] "RemoveContainer" containerID="240832d2ca864f40d0598d00e9c0362e7b1f0fca7c34deae67606e10da9e0b75" Jan 23 07:00:05 crc kubenswrapper[4937]: I0123 07:00:05.188112 4937 scope.go:117] "RemoveContainer" 
containerID="a14c265dc9b31f5d1cbc3eefc7cee11f8424393cad9bfe6e4ec011e27248bc82" Jan 23 07:00:05 crc kubenswrapper[4937]: I0123 07:00:05.257810 4937 scope.go:117] "RemoveContainer" containerID="949f654a64ef3088f47025bae16d8082b18c29fdcbe76ff3c6666ade60b8022e" Jan 23 07:00:05 crc kubenswrapper[4937]: I0123 07:00:05.308195 4937 scope.go:117] "RemoveContainer" containerID="1150dab81040022d4c2b88d4e39c49a7006fee9c4e027739d4bb4730700b48b0" Jan 23 07:00:05 crc kubenswrapper[4937]: I0123 07:00:05.339165 4937 scope.go:117] "RemoveContainer" containerID="c60f70e632c7b8c8cd6b41713a5e31fa65aa30046b5713a329baf64d82a031f3" Jan 23 07:00:05 crc kubenswrapper[4937]: I0123 07:00:05.360563 4937 scope.go:117] "RemoveContainer" containerID="3071216e4a6d08febd919524511b485190bef2bff32b13cc5bd21a36385339ad" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.157432 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29485861-x5bhn"] Jan 23 07:01:00 crc kubenswrapper[4937]: E0123 07:01:00.158440 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6897fe65-100e-44a9-a48f-ddd8d84dc839" containerName="collect-profiles" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.158456 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6897fe65-100e-44a9-a48f-ddd8d84dc839" containerName="collect-profiles" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.158841 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="6897fe65-100e-44a9-a48f-ddd8d84dc839" containerName="collect-profiles" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.159690 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.189191 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485861-x5bhn"] Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.282159 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-combined-ca-bundle\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.282587 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbbk\" (UniqueName: \"kubernetes.io/projected/3da5bf42-46a2-45de-92b9-8276f573fdb0-kube-api-access-hpbbk\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.282652 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-config-data\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.282703 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-fernet-keys\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.404734 4937 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-combined-ca-bundle\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.405027 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbbk\" (UniqueName: \"kubernetes.io/projected/3da5bf42-46a2-45de-92b9-8276f573fdb0-kube-api-access-hpbbk\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.405183 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-config-data\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.405346 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-fernet-keys\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.416734 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-combined-ca-bundle\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.417084 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-fernet-keys\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.417250 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-config-data\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.426211 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbbk\" (UniqueName: \"kubernetes.io/projected/3da5bf42-46a2-45de-92b9-8276f573fdb0-kube-api-access-hpbbk\") pod \"keystone-cron-29485861-x5bhn\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.484338 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:00 crc kubenswrapper[4937]: I0123 07:01:00.766380 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485861-x5bhn"] Jan 23 07:01:01 crc kubenswrapper[4937]: I0123 07:01:01.221886 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-x5bhn" event={"ID":"3da5bf42-46a2-45de-92b9-8276f573fdb0","Type":"ContainerStarted","Data":"af0d65a10091aaaa6ecd524a3a3f3dbf2db4b5a856da3b0c59f0f077941bf0f2"} Jan 23 07:01:01 crc kubenswrapper[4937]: I0123 07:01:01.221936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-x5bhn" event={"ID":"3da5bf42-46a2-45de-92b9-8276f573fdb0","Type":"ContainerStarted","Data":"f4ed7ea52109ce8f14cb00270e5c788f8da600422837c9c7ab41caa896383338"} Jan 23 07:01:01 crc kubenswrapper[4937]: I0123 07:01:01.252265 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29485861-x5bhn" podStartSLOduration=1.252245337 podStartE2EDuration="1.252245337s" podCreationTimestamp="2026-01-23 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:01:01.248112425 +0000 UTC m=+1661.051879088" watchObservedRunningTime="2026-01-23 07:01:01.252245337 +0000 UTC m=+1661.056012000" Jan 23 07:01:04 crc kubenswrapper[4937]: I0123 07:01:04.256662 4937 generic.go:334] "Generic (PLEG): container finished" podID="3da5bf42-46a2-45de-92b9-8276f573fdb0" containerID="af0d65a10091aaaa6ecd524a3a3f3dbf2db4b5a856da3b0c59f0f077941bf0f2" exitCode=0 Jan 23 07:01:04 crc kubenswrapper[4937]: I0123 07:01:04.256731 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-x5bhn" 
event={"ID":"3da5bf42-46a2-45de-92b9-8276f573fdb0","Type":"ContainerDied","Data":"af0d65a10091aaaa6ecd524a3a3f3dbf2db4b5a856da3b0c59f0f077941bf0f2"} Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.578521 4937 scope.go:117] "RemoveContainer" containerID="24c4169587c6655d6f1c8b3009a09680b3ad08bc8acab0352ba722016c575f64" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.623104 4937 scope.go:117] "RemoveContainer" containerID="ee5c4e621e16a9a656d2eaa91903c3b1165ca55a181e43ce4ef8bd0012b061f5" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.656665 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.817223 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-config-data\") pod \"3da5bf42-46a2-45de-92b9-8276f573fdb0\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.817276 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-fernet-keys\") pod \"3da5bf42-46a2-45de-92b9-8276f573fdb0\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.817428 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpbbk\" (UniqueName: \"kubernetes.io/projected/3da5bf42-46a2-45de-92b9-8276f573fdb0-kube-api-access-hpbbk\") pod \"3da5bf42-46a2-45de-92b9-8276f573fdb0\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.817522 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-combined-ca-bundle\") pod \"3da5bf42-46a2-45de-92b9-8276f573fdb0\" (UID: \"3da5bf42-46a2-45de-92b9-8276f573fdb0\") " Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.824809 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da5bf42-46a2-45de-92b9-8276f573fdb0-kube-api-access-hpbbk" (OuterVolumeSpecName: "kube-api-access-hpbbk") pod "3da5bf42-46a2-45de-92b9-8276f573fdb0" (UID: "3da5bf42-46a2-45de-92b9-8276f573fdb0"). InnerVolumeSpecName "kube-api-access-hpbbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.829195 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3da5bf42-46a2-45de-92b9-8276f573fdb0" (UID: "3da5bf42-46a2-45de-92b9-8276f573fdb0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.848781 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da5bf42-46a2-45de-92b9-8276f573fdb0" (UID: "3da5bf42-46a2-45de-92b9-8276f573fdb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.887407 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-config-data" (OuterVolumeSpecName: "config-data") pod "3da5bf42-46a2-45de-92b9-8276f573fdb0" (UID: "3da5bf42-46a2-45de-92b9-8276f573fdb0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.919445 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.919484 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.919498 4937 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3da5bf42-46a2-45de-92b9-8276f573fdb0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:05 crc kubenswrapper[4937]: I0123 07:01:05.919509 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpbbk\" (UniqueName: \"kubernetes.io/projected/3da5bf42-46a2-45de-92b9-8276f573fdb0-kube-api-access-hpbbk\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:06 crc kubenswrapper[4937]: I0123 07:01:06.275466 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485861-x5bhn" event={"ID":"3da5bf42-46a2-45de-92b9-8276f573fdb0","Type":"ContainerDied","Data":"f4ed7ea52109ce8f14cb00270e5c788f8da600422837c9c7ab41caa896383338"} Jan 23 07:01:06 crc kubenswrapper[4937]: I0123 07:01:06.275511 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ed7ea52109ce8f14cb00270e5c788f8da600422837c9c7ab41caa896383338" Jan 23 07:01:06 crc kubenswrapper[4937]: I0123 07:01:06.275548 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485861-x5bhn" Jan 23 07:01:07 crc kubenswrapper[4937]: I0123 07:01:07.724256 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:01:07 crc kubenswrapper[4937]: I0123 07:01:07.724609 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.298440 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bk6qr"] Jan 23 07:01:25 crc kubenswrapper[4937]: E0123 07:01:25.299725 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da5bf42-46a2-45de-92b9-8276f573fdb0" containerName="keystone-cron" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.299741 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da5bf42-46a2-45de-92b9-8276f573fdb0" containerName="keystone-cron" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.300042 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da5bf42-46a2-45de-92b9-8276f573fdb0" containerName="keystone-cron" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.302223 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.318911 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk6qr"] Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.427009 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5csm\" (UniqueName: \"kubernetes.io/projected/64cf2991-b50b-4833-ac01-17dec481ca60-kube-api-access-n5csm\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.427125 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-utilities\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.427177 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-catalog-content\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.528414 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-utilities\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.528484 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-catalog-content\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.528635 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5csm\" (UniqueName: \"kubernetes.io/projected/64cf2991-b50b-4833-ac01-17dec481ca60-kube-api-access-n5csm\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.528962 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-utilities\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.529205 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-catalog-content\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.554784 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5csm\" (UniqueName: \"kubernetes.io/projected/64cf2991-b50b-4833-ac01-17dec481ca60-kube-api-access-n5csm\") pod \"community-operators-bk6qr\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:25 crc kubenswrapper[4937]: I0123 07:01:25.674661 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:26 crc kubenswrapper[4937]: I0123 07:01:26.188885 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk6qr"] Jan 23 07:01:26 crc kubenswrapper[4937]: I0123 07:01:26.463173 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk6qr" event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerStarted","Data":"e968f01e61104acdda42957ec8d397d380f32d434c6bdfb26344ed5c0d064dfe"} Jan 23 07:01:27 crc kubenswrapper[4937]: I0123 07:01:27.476153 4937 generic.go:334] "Generic (PLEG): container finished" podID="64cf2991-b50b-4833-ac01-17dec481ca60" containerID="7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd" exitCode=0 Jan 23 07:01:27 crc kubenswrapper[4937]: I0123 07:01:27.476216 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk6qr" event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerDied","Data":"7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd"} Jan 23 07:01:29 crc kubenswrapper[4937]: I0123 07:01:29.505329 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk6qr" event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerStarted","Data":"30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29"} Jan 23 07:01:30 crc kubenswrapper[4937]: I0123 07:01:30.523292 4937 generic.go:334] "Generic (PLEG): container finished" podID="64cf2991-b50b-4833-ac01-17dec481ca60" containerID="30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29" exitCode=0 Jan 23 07:01:30 crc kubenswrapper[4937]: I0123 07:01:30.523453 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk6qr" 
event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerDied","Data":"30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29"} Jan 23 07:01:35 crc kubenswrapper[4937]: I0123 07:01:35.579350 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk6qr" event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerStarted","Data":"716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615"} Jan 23 07:01:35 crc kubenswrapper[4937]: I0123 07:01:35.613280 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bk6qr" podStartSLOduration=2.813083571 podStartE2EDuration="10.613264016s" podCreationTimestamp="2026-01-23 07:01:25 +0000 UTC" firstStartedPulling="2026-01-23 07:01:27.478892178 +0000 UTC m=+1687.282658861" lastFinishedPulling="2026-01-23 07:01:35.279072653 +0000 UTC m=+1695.082839306" observedRunningTime="2026-01-23 07:01:35.612067094 +0000 UTC m=+1695.415833807" watchObservedRunningTime="2026-01-23 07:01:35.613264016 +0000 UTC m=+1695.417030669" Jan 23 07:01:35 crc kubenswrapper[4937]: I0123 07:01:35.674980 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:35 crc kubenswrapper[4937]: I0123 07:01:35.675055 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:36 crc kubenswrapper[4937]: I0123 07:01:36.727001 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bk6qr" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="registry-server" probeResult="failure" output=< Jan 23 07:01:36 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 07:01:36 crc kubenswrapper[4937]: > Jan 23 07:01:37 crc kubenswrapper[4937]: I0123 07:01:37.723841 4937 patch_prober.go:28] 
interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:01:37 crc kubenswrapper[4937]: I0123 07:01:37.724250 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:01:45 crc kubenswrapper[4937]: I0123 07:01:45.759742 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:45 crc kubenswrapper[4937]: I0123 07:01:45.831640 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:46 crc kubenswrapper[4937]: I0123 07:01:46.001145 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk6qr"] Jan 23 07:01:47 crc kubenswrapper[4937]: I0123 07:01:47.706105 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bk6qr" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="registry-server" containerID="cri-o://716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615" gracePeriod=2 Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.183172 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.357001 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-catalog-content\") pod \"64cf2991-b50b-4833-ac01-17dec481ca60\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.357088 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5csm\" (UniqueName: \"kubernetes.io/projected/64cf2991-b50b-4833-ac01-17dec481ca60-kube-api-access-n5csm\") pod \"64cf2991-b50b-4833-ac01-17dec481ca60\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.357815 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-utilities\") pod \"64cf2991-b50b-4833-ac01-17dec481ca60\" (UID: \"64cf2991-b50b-4833-ac01-17dec481ca60\") " Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.358842 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-utilities" (OuterVolumeSpecName: "utilities") pod "64cf2991-b50b-4833-ac01-17dec481ca60" (UID: "64cf2991-b50b-4833-ac01-17dec481ca60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.366324 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64cf2991-b50b-4833-ac01-17dec481ca60-kube-api-access-n5csm" (OuterVolumeSpecName: "kube-api-access-n5csm") pod "64cf2991-b50b-4833-ac01-17dec481ca60" (UID: "64cf2991-b50b-4833-ac01-17dec481ca60"). InnerVolumeSpecName "kube-api-access-n5csm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.440627 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64cf2991-b50b-4833-ac01-17dec481ca60" (UID: "64cf2991-b50b-4833-ac01-17dec481ca60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.460894 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.460918 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5csm\" (UniqueName: \"kubernetes.io/projected/64cf2991-b50b-4833-ac01-17dec481ca60-kube-api-access-n5csm\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.460931 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64cf2991-b50b-4833-ac01-17dec481ca60-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.724377 4937 generic.go:334] "Generic (PLEG): container finished" podID="64cf2991-b50b-4833-ac01-17dec481ca60" containerID="716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615" exitCode=0 Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.724464 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk6qr" event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerDied","Data":"716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615"} Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.724939 4937 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-bk6qr" event={"ID":"64cf2991-b50b-4833-ac01-17dec481ca60","Type":"ContainerDied","Data":"e968f01e61104acdda42957ec8d397d380f32d434c6bdfb26344ed5c0d064dfe"} Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.724984 4937 scope.go:117] "RemoveContainer" containerID="716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.724493 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk6qr" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.780551 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk6qr"] Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.785613 4937 scope.go:117] "RemoveContainer" containerID="30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.800814 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bk6qr"] Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.823577 4937 scope.go:117] "RemoveContainer" containerID="7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.874815 4937 scope.go:117] "RemoveContainer" containerID="716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615" Jan 23 07:01:48 crc kubenswrapper[4937]: E0123 07:01:48.875452 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615\": container with ID starting with 716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615 not found: ID does not exist" containerID="716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 
07:01:48.875506 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615"} err="failed to get container status \"716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615\": rpc error: code = NotFound desc = could not find container \"716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615\": container with ID starting with 716a6d3f8e86aac569049bb05337fdb6f492cff147090d324df9d497abcd1615 not found: ID does not exist" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.875539 4937 scope.go:117] "RemoveContainer" containerID="30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29" Jan 23 07:01:48 crc kubenswrapper[4937]: E0123 07:01:48.876095 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29\": container with ID starting with 30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29 not found: ID does not exist" containerID="30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.876138 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29"} err="failed to get container status \"30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29\": rpc error: code = NotFound desc = could not find container \"30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29\": container with ID starting with 30e6a4719dab413d9db40d55fd491eaf1c02f2411326eaf3813ccc22bffadd29 not found: ID does not exist" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.876168 4937 scope.go:117] "RemoveContainer" containerID="7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd" Jan 23 07:01:48 crc 
kubenswrapper[4937]: E0123 07:01:48.876580 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd\": container with ID starting with 7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd not found: ID does not exist" containerID="7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd" Jan 23 07:01:48 crc kubenswrapper[4937]: I0123 07:01:48.876624 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd"} err="failed to get container status \"7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd\": rpc error: code = NotFound desc = could not find container \"7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd\": container with ID starting with 7f1b93fe90488b32758cdc9cc4b4a6fe8e677814fe0b64fb5b4ba3af81909bfd not found: ID does not exist" Jan 23 07:01:50 crc kubenswrapper[4937]: I0123 07:01:50.541058 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" path="/var/lib/kubelet/pods/64cf2991-b50b-4833-ac01-17dec481ca60/volumes" Jan 23 07:02:05 crc kubenswrapper[4937]: I0123 07:02:05.756452 4937 scope.go:117] "RemoveContainer" containerID="3329a23cd43a4e10e8d5b714f5397f73f747d24c21e798071af624184c0f400c" Jan 23 07:02:05 crc kubenswrapper[4937]: I0123 07:02:05.791820 4937 scope.go:117] "RemoveContainer" containerID="27d91b3555818bd81e643566c8a240ed3bcd084c150197ae71cf98d20293c7d6" Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.724466 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.724891 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.724957 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.726077 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.726188 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" gracePeriod=600 Jan 23 07:02:07 crc kubenswrapper[4937]: E0123 07:02:07.851409 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.945925 4937 
generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" exitCode=0 Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.945984 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a"} Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.946051 4937 scope.go:117] "RemoveContainer" containerID="a10facfc39aee45969b033e840bb8f688c490ef6b6d364e9ea5f238aa72a3af0" Jan 23 07:02:07 crc kubenswrapper[4937]: I0123 07:02:07.947219 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:02:07 crc kubenswrapper[4937]: E0123 07:02:07.948069 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:02:18 crc kubenswrapper[4937]: I0123 07:02:18.527651 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:02:18 crc kubenswrapper[4937]: E0123 07:02:18.528940 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:02:29 crc kubenswrapper[4937]: I0123 07:02:29.526428 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:02:29 crc kubenswrapper[4937]: E0123 07:02:29.527529 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:02:43 crc kubenswrapper[4937]: I0123 07:02:43.526762 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:02:43 crc kubenswrapper[4937]: E0123 07:02:43.527502 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:02:55 crc kubenswrapper[4937]: I0123 07:02:55.527061 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:02:55 crc kubenswrapper[4937]: E0123 07:02:55.527992 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.060837 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wcq4t"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.070249 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gc5jw"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.085087 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3b31-account-create-update-4tdq9"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.093984 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2f8d-account-create-update-4bzd2"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.101770 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wcq4t"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.110694 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3b31-account-create-update-4tdq9"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.120020 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gc5jw"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.128985 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2f8d-account-create-update-4bzd2"] Jan 23 07:03:09 crc kubenswrapper[4937]: I0123 07:03:09.528561 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:03:09 crc kubenswrapper[4937]: E0123 07:03:09.528895 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:03:10 crc kubenswrapper[4937]: I0123 07:03:10.538931 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27668306-a075-4ea7-a06c-05b81239ec08" path="/var/lib/kubelet/pods/27668306-a075-4ea7-a06c-05b81239ec08/volumes" Jan 23 07:03:10 crc kubenswrapper[4937]: I0123 07:03:10.541140 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae36acc6-0eaa-434a-bc7c-067729b8b888" path="/var/lib/kubelet/pods/ae36acc6-0eaa-434a-bc7c-067729b8b888/volumes" Jan 23 07:03:10 crc kubenswrapper[4937]: I0123 07:03:10.542164 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc194a9-e4ca-43fa-a46c-da9090073151" path="/var/lib/kubelet/pods/bcc194a9-e4ca-43fa-a46c-da9090073151/volumes" Jan 23 07:03:10 crc kubenswrapper[4937]: I0123 07:03:10.543242 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45e07a8-7bf3-4d38-9ef2-169a76dcf129" path="/var/lib/kubelet/pods/f45e07a8-7bf3-4d38-9ef2-169a76dcf129/volumes" Jan 23 07:03:11 crc kubenswrapper[4937]: I0123 07:03:11.030154 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-nnvvn"] Jan 23 07:03:11 crc kubenswrapper[4937]: I0123 07:03:11.040424 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-54b1-account-create-update-8xcgx"] Jan 23 07:03:11 crc kubenswrapper[4937]: I0123 07:03:11.049449 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-nnvvn"] Jan 23 07:03:11 crc kubenswrapper[4937]: I0123 07:03:11.058287 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-54b1-account-create-update-8xcgx"] Jan 23 07:03:12 crc kubenswrapper[4937]: I0123 07:03:12.538966 4937 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5a9dea0e-3304-42ee-92b2-c7d39db20bba" path="/var/lib/kubelet/pods/5a9dea0e-3304-42ee-92b2-c7d39db20bba/volumes" Jan 23 07:03:12 crc kubenswrapper[4937]: I0123 07:03:12.539784 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc26b8a1-b928-42ae-b7fd-385e4cce725c" path="/var/lib/kubelet/pods/fc26b8a1-b928-42ae-b7fd-385e4cce725c/volumes" Jan 23 07:03:20 crc kubenswrapper[4937]: I0123 07:03:20.555983 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:03:20 crc kubenswrapper[4937]: E0123 07:03:20.557671 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:03:32 crc kubenswrapper[4937]: I0123 07:03:32.528255 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:03:32 crc kubenswrapper[4937]: E0123 07:03:32.529549 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:03:38 crc kubenswrapper[4937]: I0123 07:03:38.886194 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" 
event={"ID":"00506369-00e0-43c2-be00-23b30a785c87","Type":"ContainerDied","Data":"0192fb0ba05bd370cb59fb54f3f89f4ff4210a9be2165a113a947247f02db6ac"} Jan 23 07:03:38 crc kubenswrapper[4937]: I0123 07:03:38.886191 4937 generic.go:334] "Generic (PLEG): container finished" podID="00506369-00e0-43c2-be00-23b30a785c87" containerID="0192fb0ba05bd370cb59fb54f3f89f4ff4210a9be2165a113a947247f02db6ac" exitCode=0 Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.362697 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.462273 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7n79\" (UniqueName: \"kubernetes.io/projected/00506369-00e0-43c2-be00-23b30a785c87-kube-api-access-s7n79\") pod \"00506369-00e0-43c2-be00-23b30a785c87\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.463196 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-ssh-key-openstack-edpm-ipam\") pod \"00506369-00e0-43c2-be00-23b30a785c87\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.463325 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-inventory\") pod \"00506369-00e0-43c2-be00-23b30a785c87\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.463630 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-bootstrap-combined-ca-bundle\") pod 
\"00506369-00e0-43c2-be00-23b30a785c87\" (UID: \"00506369-00e0-43c2-be00-23b30a785c87\") " Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.471520 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00506369-00e0-43c2-be00-23b30a785c87-kube-api-access-s7n79" (OuterVolumeSpecName: "kube-api-access-s7n79") pod "00506369-00e0-43c2-be00-23b30a785c87" (UID: "00506369-00e0-43c2-be00-23b30a785c87"). InnerVolumeSpecName "kube-api-access-s7n79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.471525 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "00506369-00e0-43c2-be00-23b30a785c87" (UID: "00506369-00e0-43c2-be00-23b30a785c87"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.492202 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00506369-00e0-43c2-be00-23b30a785c87" (UID: "00506369-00e0-43c2-be00-23b30a785c87"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.497793 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-inventory" (OuterVolumeSpecName: "inventory") pod "00506369-00e0-43c2-be00-23b30a785c87" (UID: "00506369-00e0-43c2-be00-23b30a785c87"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.566345 4937 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.566545 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7n79\" (UniqueName: \"kubernetes.io/projected/00506369-00e0-43c2-be00-23b30a785c87-kube-api-access-s7n79\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.566624 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.566721 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00506369-00e0-43c2-be00-23b30a785c87-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.912238 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" event={"ID":"00506369-00e0-43c2-be00-23b30a785c87","Type":"ContainerDied","Data":"bc7f6988617492628fef8634e6cf2f6e4da5a59ad0067000bff7ea6361d90197"} Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.912504 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7f6988617492628fef8634e6cf2f6e4da5a59ad0067000bff7ea6361d90197" Jan 23 07:03:40 crc kubenswrapper[4937]: I0123 07:03:40.912304 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.013248 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj"] Jan 23 07:03:41 crc kubenswrapper[4937]: E0123 07:03:41.014164 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="registry-server" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.014220 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="registry-server" Jan 23 07:03:41 crc kubenswrapper[4937]: E0123 07:03:41.014253 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00506369-00e0-43c2-be00-23b30a785c87" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.014273 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="00506369-00e0-43c2-be00-23b30a785c87" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 07:03:41 crc kubenswrapper[4937]: E0123 07:03:41.014341 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="extract-utilities" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.014358 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="extract-utilities" Jan 23 07:03:41 crc kubenswrapper[4937]: E0123 07:03:41.014402 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="extract-content" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.014418 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="extract-content" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.014832 
4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="00506369-00e0-43c2-be00-23b30a785c87" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.014921 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="64cf2991-b50b-4833-ac01-17dec481ca60" containerName="registry-server" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.016343 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.020361 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.020418 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.021013 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.021450 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.024355 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj"] Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.076407 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4xt\" (UniqueName: \"kubernetes.io/projected/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-kube-api-access-lc4xt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc 
kubenswrapper[4937]: I0123 07:03:41.076505 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.076784 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.178656 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc4xt\" (UniqueName: \"kubernetes.io/projected/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-kube-api-access-lc4xt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.178804 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.178860 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.184223 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.184361 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.198502 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc4xt\" (UniqueName: \"kubernetes.io/projected/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-kube-api-access-lc4xt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hljnj\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.343520 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.872284 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj"] Jan 23 07:03:41 crc kubenswrapper[4937]: I0123 07:03:41.920835 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" event={"ID":"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89","Type":"ContainerStarted","Data":"52793eacf7050a83c08cbd7a6cb6283e982710406611bbe7f4b0ae42e76bf3ba"} Jan 23 07:03:42 crc kubenswrapper[4937]: I0123 07:03:42.943655 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" event={"ID":"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89","Type":"ContainerStarted","Data":"e3e46cd5d9578f084b197cce83e7f50195529ed6693a82921fec3307c98d419c"} Jan 23 07:03:42 crc kubenswrapper[4937]: I0123 07:03:42.971963 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" podStartSLOduration=2.451904175 podStartE2EDuration="2.971945969s" podCreationTimestamp="2026-01-23 07:03:40 +0000 UTC" firstStartedPulling="2026-01-23 07:03:41.870055849 +0000 UTC m=+1821.673822502" lastFinishedPulling="2026-01-23 07:03:42.390097643 +0000 UTC m=+1822.193864296" observedRunningTime="2026-01-23 07:03:42.966940864 +0000 UTC m=+1822.770707517" watchObservedRunningTime="2026-01-23 07:03:42.971945969 +0000 UTC m=+1822.775712612" Jan 23 07:03:44 crc kubenswrapper[4937]: I0123 07:03:44.529563 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:03:44 crc kubenswrapper[4937]: E0123 07:03:44.530133 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:03:58 crc kubenswrapper[4937]: I0123 07:03:58.527726 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:03:58 crc kubenswrapper[4937]: E0123 07:03:58.529305 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.149433 4937 scope.go:117] "RemoveContainer" containerID="0adc1cb4667efb72fd811085288b51db0cbf5374bd6eb426f80b143b0e3072ae" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.183323 4937 scope.go:117] "RemoveContainer" containerID="2950a37cdedee3935d72518cb33c511d33eb3a15c4b54de346f021b84fada86d" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.262239 4937 scope.go:117] "RemoveContainer" containerID="099473046a862d14ff6fcdd759485d110ba6dc14ce46d611f97221058cf45285" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.305313 4937 scope.go:117] "RemoveContainer" containerID="14b704ce5d3d0358f39246949cfa3e8fae8902aa712e1431939928a447e73302" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.353605 4937 scope.go:117] "RemoveContainer" containerID="b996e6a162c54accf3017376c131f8824f7cec146dc381fc8506b7c985de1eb2" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.390631 4937 scope.go:117] "RemoveContainer" 
containerID="bd320fa4a8d387736df2cf963e92cb08ebbc179cac4ba082b8d622ac9291ef11" Jan 23 07:04:06 crc kubenswrapper[4937]: I0123 07:04:06.434284 4937 scope.go:117] "RemoveContainer" containerID="babe0c5078136a72d4bc9fb5fa6af8982430b26359482becdbf1366f0e5c8322" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.065181 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d9e4-account-create-update-5v6g8"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.087005 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xkzmc"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.098999 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-52bq2"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.115253 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-17d2-account-create-update-xn6kn"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.127511 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-sw22l"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.138977 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d9e4-account-create-update-5v6g8"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.149146 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qj42j"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.159781 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-52bq2"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.168562 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d8ad-account-create-update-csqmj"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.177259 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-17d2-account-create-update-xn6kn"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.187459 4937 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qj42j"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.197276 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xkzmc"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.206315 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-sw22l"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.214284 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d8ad-account-create-update-csqmj"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.223625 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5a85-account-create-update-5ztfz"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.232930 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5a85-account-create-update-5ztfz"] Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.542498 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25d78873-02fa-4d85-acc3-2b16a2a64d1d" path="/var/lib/kubelet/pods/25d78873-02fa-4d85-acc3-2b16a2a64d1d/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.544230 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59ca4133-99d3-4a83-9f7c-7c033fb97be9" path="/var/lib/kubelet/pods/59ca4133-99d3-4a83-9f7c-7c033fb97be9/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.545138 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="984b99dd-0126-409e-b676-fb8d7c21dac5" path="/var/lib/kubelet/pods/984b99dd-0126-409e-b676-fb8d7c21dac5/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.546741 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd308e9-2acb-4147-832b-54ef709110b9" path="/var/lib/kubelet/pods/acd308e9-2acb-4147-832b-54ef709110b9/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.549360 
4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bd9db7-c05b-439f-b15c-65c07379eed1" path="/var/lib/kubelet/pods/c1bd9db7-c05b-439f-b15c-65c07379eed1/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.550328 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2d2864f-0eb5-49f7-82f5-7101a4e794b4" path="/var/lib/kubelet/pods/d2d2864f-0eb5-49f7-82f5-7101a4e794b4/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.551508 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7f8b93-b708-40a9-ab16-38829a84d0c8" path="/var/lib/kubelet/pods/ec7f8b93-b708-40a9-ab16-38829a84d0c8/volumes" Jan 23 07:04:08 crc kubenswrapper[4937]: I0123 07:04:08.553039 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42c87e6-6330-47b7-bc7c-29a4d469f783" path="/var/lib/kubelet/pods/f42c87e6-6330-47b7-bc7c-29a4d469f783/volumes" Jan 23 07:04:11 crc kubenswrapper[4937]: I0123 07:04:11.525997 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:04:11 crc kubenswrapper[4937]: E0123 07:04:11.526280 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:04:14 crc kubenswrapper[4937]: I0123 07:04:14.047792 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-gq9c7"] Jan 23 07:04:14 crc kubenswrapper[4937]: I0123 07:04:14.068670 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hmfzs"] Jan 23 07:04:14 crc kubenswrapper[4937]: I0123 07:04:14.106087 4937 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-gq9c7"] Jan 23 07:04:14 crc kubenswrapper[4937]: I0123 07:04:14.120195 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hmfzs"] Jan 23 07:04:14 crc kubenswrapper[4937]: I0123 07:04:14.543017 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0685ee-4785-4cba-906e-1dc96462dfd8" path="/var/lib/kubelet/pods/4d0685ee-4785-4cba-906e-1dc96462dfd8/volumes" Jan 23 07:04:14 crc kubenswrapper[4937]: I0123 07:04:14.544763 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fa8840-692c-42de-9ee7-90b9a399aff7" path="/var/lib/kubelet/pods/93fa8840-692c-42de-9ee7-90b9a399aff7/volumes" Jan 23 07:04:26 crc kubenswrapper[4937]: I0123 07:04:26.526298 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:04:26 crc kubenswrapper[4937]: E0123 07:04:26.527014 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:04:37 crc kubenswrapper[4937]: I0123 07:04:37.527257 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:04:37 crc kubenswrapper[4937]: E0123 07:04:37.528329 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:04:50 crc kubenswrapper[4937]: I0123 07:04:50.540508 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:04:50 crc kubenswrapper[4937]: E0123 07:04:50.541796 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:05:03 crc kubenswrapper[4937]: I0123 07:05:03.526685 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:05:03 crc kubenswrapper[4937]: E0123 07:05:03.528156 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.612732 4937 scope.go:117] "RemoveContainer" containerID="30e9b3c28bc99f50d1eef034a2f9341c33ded7490074ebec69187e9aaeaddd4e" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.641040 4937 scope.go:117] "RemoveContainer" containerID="e692c9efcdc01b568ddb350ff63f9e54d158272e124bc8505fc6b438247d9fb1" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.711434 4937 scope.go:117] "RemoveContainer" containerID="de60b485dbd6634834b74fb8978b0c9f338cfc7f6e80244eb5fbd6bc8d7aedf2" Jan 
23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.755427 4937 scope.go:117] "RemoveContainer" containerID="10ec3343b9128a3503a13bfc72b8b41afea597c1ab7737b33dda9e1f6a6b6a12" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.801565 4937 scope.go:117] "RemoveContainer" containerID="55130bd7f8b8cecacc529d513613abdcac6d6b6b3df67d95735ae5ff728e8d08" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.845661 4937 scope.go:117] "RemoveContainer" containerID="d317ec48fc97a8c1103f439a86a4d038cbabbad6968c865962a23f624856cd2b" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.893474 4937 scope.go:117] "RemoveContainer" containerID="d40e35dade476c123ac0b49287dd7cf102fa07d297f806d70e4dea9011467907" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.923415 4937 scope.go:117] "RemoveContainer" containerID="09174f85ccdc72a1ae0351d22088a9f9281ac8a3b21d95fcb7d0cbead915f57e" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.948393 4937 scope.go:117] "RemoveContainer" containerID="dffc13c4bf8096996071d4d1eb83956201f8e5e118bddfa51a5bad9c6eee6f4f" Jan 23 07:05:06 crc kubenswrapper[4937]: I0123 07:05:06.971227 4937 scope.go:117] "RemoveContainer" containerID="376b59faf6b28db368074e5a711f7e5fb359eae800eb9a1d528ae5d1e44899d2" Jan 23 07:05:09 crc kubenswrapper[4937]: I0123 07:05:09.082007 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dwbr9"] Jan 23 07:05:09 crc kubenswrapper[4937]: I0123 07:05:09.094140 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dwbr9"] Jan 23 07:05:10 crc kubenswrapper[4937]: I0123 07:05:10.544068 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2caeb6b9-b501-4ac8-9f28-e3b5a40049cc" path="/var/lib/kubelet/pods/2caeb6b9-b501-4ac8-9f28-e3b5a40049cc/volumes" Jan 23 07:05:11 crc kubenswrapper[4937]: I0123 07:05:11.039451 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-srnpk"] Jan 
23 07:05:11 crc kubenswrapper[4937]: I0123 07:05:11.053336 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-srnpk"] Jan 23 07:05:12 crc kubenswrapper[4937]: I0123 07:05:12.559063 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb165f5-08a0-404a-88b3-f89dc3195c28" path="/var/lib/kubelet/pods/deb165f5-08a0-404a-88b3-f89dc3195c28/volumes" Jan 23 07:05:17 crc kubenswrapper[4937]: I0123 07:05:17.526935 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:05:17 crc kubenswrapper[4937]: E0123 07:05:17.527746 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:05:28 crc kubenswrapper[4937]: I0123 07:05:28.054765 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gbhl4"] Jan 23 07:05:28 crc kubenswrapper[4937]: I0123 07:05:28.065243 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gbhl4"] Jan 23 07:05:28 crc kubenswrapper[4937]: I0123 07:05:28.541058 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dcc2c8-b02c-4203-9fa8-1af6e12615d4" path="/var/lib/kubelet/pods/71dcc2c8-b02c-4203-9fa8-1af6e12615d4/volumes" Jan 23 07:05:29 crc kubenswrapper[4937]: I0123 07:05:29.526732 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:05:29 crc kubenswrapper[4937]: E0123 07:05:29.527752 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:05:32 crc kubenswrapper[4937]: I0123 07:05:32.035953 4937 generic.go:334] "Generic (PLEG): container finished" podID="4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" containerID="e3e46cd5d9578f084b197cce83e7f50195529ed6693a82921fec3307c98d419c" exitCode=0 Jan 23 07:05:32 crc kubenswrapper[4937]: I0123 07:05:32.036095 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" event={"ID":"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89","Type":"ContainerDied","Data":"e3e46cd5d9578f084b197cce83e7f50195529ed6693a82921fec3307c98d419c"} Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.457523 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.560453 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-ssh-key-openstack-edpm-ipam\") pod \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.560667 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-inventory\") pod \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.560840 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc4xt\" (UniqueName: \"kubernetes.io/projected/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-kube-api-access-lc4xt\") pod \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\" (UID: \"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89\") " Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.566652 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-kube-api-access-lc4xt" (OuterVolumeSpecName: "kube-api-access-lc4xt") pod "4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" (UID: "4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89"). InnerVolumeSpecName "kube-api-access-lc4xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.592788 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-inventory" (OuterVolumeSpecName: "inventory") pod "4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" (UID: "4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.593043 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" (UID: "4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.662246 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.662277 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc4xt\" (UniqueName: \"kubernetes.io/projected/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-kube-api-access-lc4xt\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:33 crc kubenswrapper[4937]: I0123 07:05:33.662291 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.055930 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" event={"ID":"4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89","Type":"ContainerDied","Data":"52793eacf7050a83c08cbd7a6cb6283e982710406611bbe7f4b0ae42e76bf3ba"} Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.056003 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52793eacf7050a83c08cbd7a6cb6283e982710406611bbe7f4b0ae42e76bf3ba" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 
07:05:34.056013 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hljnj" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.148891 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9"] Jan 23 07:05:34 crc kubenswrapper[4937]: E0123 07:05:34.149551 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.149569 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.149768 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.151438 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.153447 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.153974 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.154066 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.154092 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.163193 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9"] Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.274123 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.274210 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc 
kubenswrapper[4937]: I0123 07:05:34.274447 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nrqr\" (UniqueName: \"kubernetes.io/projected/ae86befe-47e5-4645-b432-24184f2ebca6-kube-api-access-7nrqr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.376131 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.376222 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.376263 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nrqr\" (UniqueName: \"kubernetes.io/projected/ae86befe-47e5-4645-b432-24184f2ebca6-kube-api-access-7nrqr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.381632 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.397738 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.398299 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nrqr\" (UniqueName: \"kubernetes.io/projected/ae86befe-47e5-4645-b432-24184f2ebca6-kube-api-access-7nrqr\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:34 crc kubenswrapper[4937]: I0123 07:05:34.469275 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:05:35 crc kubenswrapper[4937]: I0123 07:05:35.057187 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9"] Jan 23 07:05:35 crc kubenswrapper[4937]: I0123 07:05:35.067706 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:05:36 crc kubenswrapper[4937]: I0123 07:05:36.080164 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" event={"ID":"ae86befe-47e5-4645-b432-24184f2ebca6","Type":"ContainerStarted","Data":"1e2309540c76067aec844e0cd0d79f9d30d60c0f750e202b1c2920dc21d220c1"} Jan 23 07:05:36 crc kubenswrapper[4937]: I0123 07:05:36.080672 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" event={"ID":"ae86befe-47e5-4645-b432-24184f2ebca6","Type":"ContainerStarted","Data":"750fe4fe646288255e0eec0ffaa80fceb32ab0bdb07703164001323365fef732"} Jan 23 07:05:36 crc kubenswrapper[4937]: I0123 07:05:36.109202 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" podStartSLOduration=1.602248591 podStartE2EDuration="2.109170977s" podCreationTimestamp="2026-01-23 07:05:34 +0000 UTC" firstStartedPulling="2026-01-23 07:05:35.067435408 +0000 UTC m=+1934.871202071" lastFinishedPulling="2026-01-23 07:05:35.574357794 +0000 UTC m=+1935.378124457" observedRunningTime="2026-01-23 07:05:36.091481909 +0000 UTC m=+1935.895248572" watchObservedRunningTime="2026-01-23 07:05:36.109170977 +0000 UTC m=+1935.912937770" Jan 23 07:05:40 crc kubenswrapper[4937]: I0123 07:05:40.032380 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-gkrpw"] Jan 23 07:05:40 crc 
kubenswrapper[4937]: I0123 07:05:40.050666 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-gkrpw"] Jan 23 07:05:40 crc kubenswrapper[4937]: I0123 07:05:40.539388 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:05:40 crc kubenswrapper[4937]: E0123 07:05:40.539797 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:05:40 crc kubenswrapper[4937]: I0123 07:05:40.540445 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47488e25-41b9-46e1-8ad7-5cfe2c4654c7" path="/var/lib/kubelet/pods/47488e25-41b9-46e1-8ad7-5cfe2c4654c7/volumes" Jan 23 07:05:43 crc kubenswrapper[4937]: I0123 07:05:43.030652 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-zjhvz"] Jan 23 07:05:43 crc kubenswrapper[4937]: I0123 07:05:43.039286 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-zjhvz"] Jan 23 07:05:44 crc kubenswrapper[4937]: I0123 07:05:44.043016 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-xgx9n"] Jan 23 07:05:44 crc kubenswrapper[4937]: I0123 07:05:44.062574 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-xgx9n"] Jan 23 07:05:44 crc kubenswrapper[4937]: I0123 07:05:44.539950 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2537508-9450-4931-b4a4-d87cfdaa4a77" path="/var/lib/kubelet/pods/f2537508-9450-4931-b4a4-d87cfdaa4a77/volumes" Jan 23 07:05:44 crc kubenswrapper[4937]: I0123 07:05:44.540696 
4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77dc563-57f2-4c47-a627-98d15343173b" path="/var/lib/kubelet/pods/f77dc563-57f2-4c47-a627-98d15343173b/volumes" Jan 23 07:05:54 crc kubenswrapper[4937]: I0123 07:05:54.526218 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:05:54 crc kubenswrapper[4937]: E0123 07:05:54.527038 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:06:00 crc kubenswrapper[4937]: I0123 07:06:00.055455 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2n4zb"] Jan 23 07:06:00 crc kubenswrapper[4937]: I0123 07:06:00.063820 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2n4zb"] Jan 23 07:06:00 crc kubenswrapper[4937]: I0123 07:06:00.541793 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e02394c2-2975-4589-9670-7c69fa89cb1d" path="/var/lib/kubelet/pods/e02394c2-2975-4589-9670-7c69fa89cb1d/volumes" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.009534 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mffnz"] Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.013387 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.036463 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mffnz"] Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.058971 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-utilities\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.059173 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6mb\" (UniqueName: \"kubernetes.io/projected/10ed120e-f12a-4867-8d21-7c9630711331-kube-api-access-ft6mb\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.059217 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-catalog-content\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.160297 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6mb\" (UniqueName: \"kubernetes.io/projected/10ed120e-f12a-4867-8d21-7c9630711331-kube-api-access-ft6mb\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.160337 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-catalog-content\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.160417 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-utilities\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.160953 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-catalog-content\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.161139 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-utilities\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.187484 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6mb\" (UniqueName: \"kubernetes.io/projected/10ed120e-f12a-4867-8d21-7c9630711331-kube-api-access-ft6mb\") pod \"certified-operators-mffnz\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.340566 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:06 crc kubenswrapper[4937]: I0123 07:06:06.814231 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mffnz"] Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.159135 4937 scope.go:117] "RemoveContainer" containerID="1f2ee790343d87ce9795030e705dbac913df0eea1e8b8d884c2493899ec3767f" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.197165 4937 scope.go:117] "RemoveContainer" containerID="638c4f31a5c7354cb1c84a5c4b249b40b38566925f097f3356229dd4081fa8bf" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.278068 4937 scope.go:117] "RemoveContainer" containerID="76b5749b3524d08f65f214cc3f03f07a43470a7421af911d91d8855745305eb1" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.321298 4937 scope.go:117] "RemoveContainer" containerID="e06a533c0042274339700dc5a47431b2f7b27a413ba1ccbe65d7aac77de23625" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.370473 4937 scope.go:117] "RemoveContainer" containerID="10c44657af758b0048b7e631a0b8133f00322859c0a18f99c4a5fa8bcd7c1f3f" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.400192 4937 generic.go:334] "Generic (PLEG): container finished" podID="10ed120e-f12a-4867-8d21-7c9630711331" containerID="b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461" exitCode=0 Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.400244 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerDied","Data":"b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461"} Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.400273 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" 
event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerStarted","Data":"be3051ada1db767f4e5f18a0e26594565083417f8885a5ccc09b2597eaaa7a98"} Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.449666 4937 scope.go:117] "RemoveContainer" containerID="80443c55ed14927bc9a1cbd967a29c2f5b3f944226efa99da34d17629e5d73ce" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.482935 4937 scope.go:117] "RemoveContainer" containerID="1a63be59397fb453f40877ce49a3d597cdad81a46ed53136ff3711fa90499d36" Jan 23 07:06:07 crc kubenswrapper[4937]: I0123 07:06:07.526153 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:06:07 crc kubenswrapper[4937]: E0123 07:06:07.526507 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:06:08 crc kubenswrapper[4937]: I0123 07:06:08.415568 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerStarted","Data":"ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec"} Jan 23 07:06:09 crc kubenswrapper[4937]: I0123 07:06:09.425038 4937 generic.go:334] "Generic (PLEG): container finished" podID="10ed120e-f12a-4867-8d21-7c9630711331" containerID="ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec" exitCode=0 Jan 23 07:06:09 crc kubenswrapper[4937]: I0123 07:06:09.425084 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" 
event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerDied","Data":"ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec"} Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.040509 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jh76z"] Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.058868 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-634e-account-create-update-px24p"] Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.073483 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jh76z"] Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.085313 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-634e-account-create-update-px24p"] Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.435577 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerStarted","Data":"b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e"} Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.457144 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mffnz" podStartSLOduration=3.046138579 podStartE2EDuration="5.457128931s" podCreationTimestamp="2026-01-23 07:06:05 +0000 UTC" firstStartedPulling="2026-01-23 07:06:07.404847131 +0000 UTC m=+1967.208613784" lastFinishedPulling="2026-01-23 07:06:09.815837473 +0000 UTC m=+1969.619604136" observedRunningTime="2026-01-23 07:06:10.455421495 +0000 UTC m=+1970.259188148" watchObservedRunningTime="2026-01-23 07:06:10.457128931 +0000 UTC m=+1970.260895584" Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.539671 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa" 
path="/var/lib/kubelet/pods/c8b5fe79-161f-45d6-8d96-0e6ff27e2dfa/volumes" Jan 23 07:06:10 crc kubenswrapper[4937]: I0123 07:06:10.540534 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83400c9-5cb0-40c8-907b-92e840794f92" path="/var/lib/kubelet/pods/e83400c9-5cb0-40c8-907b-92e840794f92/volumes" Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.030452 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ab28-account-create-update-vqsqw"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.039470 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6157-account-create-update-8fwxn"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.052741 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vg7tb"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.062310 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-827z7"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.076644 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ab28-account-create-update-vqsqw"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.085007 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vg7tb"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.093859 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-827z7"] Jan 23 07:06:11 crc kubenswrapper[4937]: I0123 07:06:11.102241 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6157-account-create-update-8fwxn"] Jan 23 07:06:12 crc kubenswrapper[4937]: I0123 07:06:12.541422 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be" path="/var/lib/kubelet/pods/6dd77c35-b6f2-4e71-a6a9-ca7998a8f6be/volumes" Jan 23 07:06:12 crc kubenswrapper[4937]: I0123 
07:06:12.542441 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b93d08-1c46-4589-bb41-3a6353b03d7f" path="/var/lib/kubelet/pods/93b93d08-1c46-4589-bb41-3a6353b03d7f/volumes" Jan 23 07:06:12 crc kubenswrapper[4937]: I0123 07:06:12.543276 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a931af15-27d9-46f8-ab92-0af812ed5cd5" path="/var/lib/kubelet/pods/a931af15-27d9-46f8-ab92-0af812ed5cd5/volumes" Jan 23 07:06:12 crc kubenswrapper[4937]: I0123 07:06:12.544207 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4d3482-c2df-46d9-88ad-d8199d012375" path="/var/lib/kubelet/pods/da4d3482-c2df-46d9-88ad-d8199d012375/volumes" Jan 23 07:06:16 crc kubenswrapper[4937]: I0123 07:06:16.340972 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:16 crc kubenswrapper[4937]: I0123 07:06:16.341879 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:16 crc kubenswrapper[4937]: I0123 07:06:16.392081 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:16 crc kubenswrapper[4937]: I0123 07:06:16.612017 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:16 crc kubenswrapper[4937]: I0123 07:06:16.683959 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mffnz"] Jan 23 07:06:18 crc kubenswrapper[4937]: I0123 07:06:18.510625 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mffnz" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="registry-server" containerID="cri-o://b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e" 
gracePeriod=2 Jan 23 07:06:18 crc kubenswrapper[4937]: I0123 07:06:18.987981 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.141257 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-utilities\") pod \"10ed120e-f12a-4867-8d21-7c9630711331\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.141470 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft6mb\" (UniqueName: \"kubernetes.io/projected/10ed120e-f12a-4867-8d21-7c9630711331-kube-api-access-ft6mb\") pod \"10ed120e-f12a-4867-8d21-7c9630711331\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.141660 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-catalog-content\") pod \"10ed120e-f12a-4867-8d21-7c9630711331\" (UID: \"10ed120e-f12a-4867-8d21-7c9630711331\") " Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.143225 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-utilities" (OuterVolumeSpecName: "utilities") pod "10ed120e-f12a-4867-8d21-7c9630711331" (UID: "10ed120e-f12a-4867-8d21-7c9630711331"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.148006 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ed120e-f12a-4867-8d21-7c9630711331-kube-api-access-ft6mb" (OuterVolumeSpecName: "kube-api-access-ft6mb") pod "10ed120e-f12a-4867-8d21-7c9630711331" (UID: "10ed120e-f12a-4867-8d21-7c9630711331"). InnerVolumeSpecName "kube-api-access-ft6mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.201085 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10ed120e-f12a-4867-8d21-7c9630711331" (UID: "10ed120e-f12a-4867-8d21-7c9630711331"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.243682 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft6mb\" (UniqueName: \"kubernetes.io/projected/10ed120e-f12a-4867-8d21-7c9630711331-kube-api-access-ft6mb\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.243720 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.243734 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10ed120e-f12a-4867-8d21-7c9630711331-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.523749 4937 generic.go:334] "Generic (PLEG): container finished" podID="10ed120e-f12a-4867-8d21-7c9630711331" 
containerID="b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e" exitCode=0 Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.523805 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mffnz" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.523823 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerDied","Data":"b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e"} Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.524184 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mffnz" event={"ID":"10ed120e-f12a-4867-8d21-7c9630711331","Type":"ContainerDied","Data":"be3051ada1db767f4e5f18a0e26594565083417f8885a5ccc09b2597eaaa7a98"} Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.524212 4937 scope.go:117] "RemoveContainer" containerID="b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.526129 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:06:19 crc kubenswrapper[4937]: E0123 07:06:19.526576 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.553304 4937 scope.go:117] "RemoveContainer" containerID="ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec" Jan 23 07:06:19 crc 
kubenswrapper[4937]: I0123 07:06:19.562561 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mffnz"] Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.571448 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mffnz"] Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.587447 4937 scope.go:117] "RemoveContainer" containerID="b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.625459 4937 scope.go:117] "RemoveContainer" containerID="b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e" Jan 23 07:06:19 crc kubenswrapper[4937]: E0123 07:06:19.625927 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e\": container with ID starting with b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e not found: ID does not exist" containerID="b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.625967 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e"} err="failed to get container status \"b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e\": rpc error: code = NotFound desc = could not find container \"b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e\": container with ID starting with b623eccf50a3b6d4a883feea434c3d70f14b9cedbd670388f76f8bec5231be6e not found: ID does not exist" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.625992 4937 scope.go:117] "RemoveContainer" containerID="ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec" Jan 23 07:06:19 crc kubenswrapper[4937]: E0123 07:06:19.626380 4937 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec\": container with ID starting with ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec not found: ID does not exist" containerID="ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.626412 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec"} err="failed to get container status \"ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec\": rpc error: code = NotFound desc = could not find container \"ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec\": container with ID starting with ce1a52dd8004641607784b63f3a5b518c568f3867a897db5e3eb08590b7367ec not found: ID does not exist" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.626433 4937 scope.go:117] "RemoveContainer" containerID="b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461" Jan 23 07:06:19 crc kubenswrapper[4937]: E0123 07:06:19.626816 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461\": container with ID starting with b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461 not found: ID does not exist" containerID="b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461" Jan 23 07:06:19 crc kubenswrapper[4937]: I0123 07:06:19.626930 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461"} err="failed to get container status \"b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461\": rpc error: code = NotFound desc = could 
not find container \"b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461\": container with ID starting with b2f7e8a3a9c4fe40d859bfbcae758e1af622d65591809224a2dd05f6a4e89461 not found: ID does not exist" Jan 23 07:06:20 crc kubenswrapper[4937]: I0123 07:06:20.547409 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10ed120e-f12a-4867-8d21-7c9630711331" path="/var/lib/kubelet/pods/10ed120e-f12a-4867-8d21-7c9630711331/volumes" Jan 23 07:06:30 crc kubenswrapper[4937]: I0123 07:06:30.533253 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:06:30 crc kubenswrapper[4937]: E0123 07:06:30.534116 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:06:45 crc kubenswrapper[4937]: I0123 07:06:45.526409 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:06:45 crc kubenswrapper[4937]: E0123 07:06:45.527191 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:06:52 crc kubenswrapper[4937]: I0123 07:06:52.041499 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-26qqm"] Jan 23 07:06:52 crc 
kubenswrapper[4937]: I0123 07:06:52.053310 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-26qqm"] Jan 23 07:06:52 crc kubenswrapper[4937]: I0123 07:06:52.541805 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3" path="/var/lib/kubelet/pods/e39f8ffb-bba6-4585-b7cd-cdc0d5e026d3/volumes" Jan 23 07:06:54 crc kubenswrapper[4937]: I0123 07:06:54.871187 4937 generic.go:334] "Generic (PLEG): container finished" podID="ae86befe-47e5-4645-b432-24184f2ebca6" containerID="1e2309540c76067aec844e0cd0d79f9d30d60c0f750e202b1c2920dc21d220c1" exitCode=0 Jan 23 07:06:54 crc kubenswrapper[4937]: I0123 07:06:54.871773 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" event={"ID":"ae86befe-47e5-4645-b432-24184f2ebca6","Type":"ContainerDied","Data":"1e2309540c76067aec844e0cd0d79f9d30d60c0f750e202b1c2920dc21d220c1"} Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.349576 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.491303 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-inventory\") pod \"ae86befe-47e5-4645-b432-24184f2ebca6\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.491399 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-ssh-key-openstack-edpm-ipam\") pod \"ae86befe-47e5-4645-b432-24184f2ebca6\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.491558 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nrqr\" (UniqueName: \"kubernetes.io/projected/ae86befe-47e5-4645-b432-24184f2ebca6-kube-api-access-7nrqr\") pod \"ae86befe-47e5-4645-b432-24184f2ebca6\" (UID: \"ae86befe-47e5-4645-b432-24184f2ebca6\") " Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.503800 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae86befe-47e5-4645-b432-24184f2ebca6-kube-api-access-7nrqr" (OuterVolumeSpecName: "kube-api-access-7nrqr") pod "ae86befe-47e5-4645-b432-24184f2ebca6" (UID: "ae86befe-47e5-4645-b432-24184f2ebca6"). InnerVolumeSpecName "kube-api-access-7nrqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.528693 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-inventory" (OuterVolumeSpecName: "inventory") pod "ae86befe-47e5-4645-b432-24184f2ebca6" (UID: "ae86befe-47e5-4645-b432-24184f2ebca6"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.533062 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae86befe-47e5-4645-b432-24184f2ebca6" (UID: "ae86befe-47e5-4645-b432-24184f2ebca6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.593460 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.593492 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae86befe-47e5-4645-b432-24184f2ebca6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.593503 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nrqr\" (UniqueName: \"kubernetes.io/projected/ae86befe-47e5-4645-b432-24184f2ebca6-kube-api-access-7nrqr\") on node \"crc\" DevicePath \"\"" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.888523 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" event={"ID":"ae86befe-47e5-4645-b432-24184f2ebca6","Type":"ContainerDied","Data":"750fe4fe646288255e0eec0ffaa80fceb32ab0bdb07703164001323365fef732"} Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.888568 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.888573 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750fe4fe646288255e0eec0ffaa80fceb32ab0bdb07703164001323365fef732" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.984476 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb"] Jan 23 07:06:56 crc kubenswrapper[4937]: E0123 07:06:56.984975 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="registry-server" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.984999 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="registry-server" Jan 23 07:06:56 crc kubenswrapper[4937]: E0123 07:06:56.985019 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae86befe-47e5-4645-b432-24184f2ebca6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.985030 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae86befe-47e5-4645-b432-24184f2ebca6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 07:06:56 crc kubenswrapper[4937]: E0123 07:06:56.985045 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="extract-content" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.985053 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="extract-content" Jan 23 07:06:56 crc kubenswrapper[4937]: E0123 07:06:56.985089 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="extract-utilities" Jan 23 07:06:56 
crc kubenswrapper[4937]: I0123 07:06:56.985097 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="extract-utilities" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.985345 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae86befe-47e5-4645-b432-24184f2ebca6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.985370 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ed120e-f12a-4867-8d21-7c9630711331" containerName="registry-server" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.986270 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.989256 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.989654 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.989731 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:06:56 crc kubenswrapper[4937]: I0123 07:06:56.990005 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.000106 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb"] Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.103025 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54v29\" (UniqueName: 
\"kubernetes.io/projected/68285833-1fa7-453c-a35e-197efebf176e-kube-api-access-54v29\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.103115 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.103287 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.204710 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54v29\" (UniqueName: \"kubernetes.io/projected/68285833-1fa7-453c-a35e-197efebf176e-kube-api-access-54v29\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.205038 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-ssh-key-openstack-edpm-ipam\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.205185 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.210272 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.210292 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.222514 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54v29\" (UniqueName: \"kubernetes.io/projected/68285833-1fa7-453c-a35e-197efebf176e-kube-api-access-54v29\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc 
kubenswrapper[4937]: I0123 07:06:57.311634 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.856084 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb"] Jan 23 07:06:57 crc kubenswrapper[4937]: I0123 07:06:57.898862 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" event={"ID":"68285833-1fa7-453c-a35e-197efebf176e","Type":"ContainerStarted","Data":"b9ad21ffd19eb21aa088893ceef9d8c3bd8efc45b0fc89b8d1f29012e16bd84c"} Jan 23 07:06:58 crc kubenswrapper[4937]: I0123 07:06:58.527167 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:06:58 crc kubenswrapper[4937]: E0123 07:06:58.527503 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:06:58 crc kubenswrapper[4937]: I0123 07:06:58.911384 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" event={"ID":"68285833-1fa7-453c-a35e-197efebf176e","Type":"ContainerStarted","Data":"9aac9b8655e5ffb6cc037796d07c9d682513100c86a34690ad5d0c0094d37330"} Jan 23 07:06:58 crc kubenswrapper[4937]: I0123 07:06:58.930630 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" podStartSLOduration=2.240099606 
podStartE2EDuration="2.930609887s" podCreationTimestamp="2026-01-23 07:06:56 +0000 UTC" firstStartedPulling="2026-01-23 07:06:57.87718141 +0000 UTC m=+2017.680948063" lastFinishedPulling="2026-01-23 07:06:58.567691681 +0000 UTC m=+2018.371458344" observedRunningTime="2026-01-23 07:06:58.927496923 +0000 UTC m=+2018.731263576" watchObservedRunningTime="2026-01-23 07:06:58.930609887 +0000 UTC m=+2018.734376540" Jan 23 07:07:03 crc kubenswrapper[4937]: I0123 07:07:03.962769 4937 generic.go:334] "Generic (PLEG): container finished" podID="68285833-1fa7-453c-a35e-197efebf176e" containerID="9aac9b8655e5ffb6cc037796d07c9d682513100c86a34690ad5d0c0094d37330" exitCode=0 Jan 23 07:07:03 crc kubenswrapper[4937]: I0123 07:07:03.962854 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" event={"ID":"68285833-1fa7-453c-a35e-197efebf176e","Type":"ContainerDied","Data":"9aac9b8655e5ffb6cc037796d07c9d682513100c86a34690ad5d0c0094d37330"} Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.372908 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.479398 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-ssh-key-openstack-edpm-ipam\") pod \"68285833-1fa7-453c-a35e-197efebf176e\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.479727 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-inventory\") pod \"68285833-1fa7-453c-a35e-197efebf176e\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.479768 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54v29\" (UniqueName: \"kubernetes.io/projected/68285833-1fa7-453c-a35e-197efebf176e-kube-api-access-54v29\") pod \"68285833-1fa7-453c-a35e-197efebf176e\" (UID: \"68285833-1fa7-453c-a35e-197efebf176e\") " Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.485467 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68285833-1fa7-453c-a35e-197efebf176e-kube-api-access-54v29" (OuterVolumeSpecName: "kube-api-access-54v29") pod "68285833-1fa7-453c-a35e-197efebf176e" (UID: "68285833-1fa7-453c-a35e-197efebf176e"). InnerVolumeSpecName "kube-api-access-54v29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.511904 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68285833-1fa7-453c-a35e-197efebf176e" (UID: "68285833-1fa7-453c-a35e-197efebf176e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.512392 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-inventory" (OuterVolumeSpecName: "inventory") pod "68285833-1fa7-453c-a35e-197efebf176e" (UID: "68285833-1fa7-453c-a35e-197efebf176e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.583858 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.583903 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68285833-1fa7-453c-a35e-197efebf176e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.583919 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54v29\" (UniqueName: \"kubernetes.io/projected/68285833-1fa7-453c-a35e-197efebf176e-kube-api-access-54v29\") on node \"crc\" DevicePath \"\"" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.981136 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" 
event={"ID":"68285833-1fa7-453c-a35e-197efebf176e","Type":"ContainerDied","Data":"b9ad21ffd19eb21aa088893ceef9d8c3bd8efc45b0fc89b8d1f29012e16bd84c"} Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.981191 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ad21ffd19eb21aa088893ceef9d8c3bd8efc45b0fc89b8d1f29012e16bd84c" Jan 23 07:07:05 crc kubenswrapper[4937]: I0123 07:07:05.981232 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.070619 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f"] Jan 23 07:07:06 crc kubenswrapper[4937]: E0123 07:07:06.071015 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68285833-1fa7-453c-a35e-197efebf176e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.071037 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="68285833-1fa7-453c-a35e-197efebf176e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.071233 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="68285833-1fa7-453c-a35e-197efebf176e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.071932 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.075081 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.075278 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.075427 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.075080 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.083901 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f"] Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.196928 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hc8q\" (UniqueName: \"kubernetes.io/projected/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-kube-api-access-9hc8q\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.196977 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.197003 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.299296 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hc8q\" (UniqueName: \"kubernetes.io/projected/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-kube-api-access-9hc8q\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.299366 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.299396 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.303305 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.303694 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.318609 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hc8q\" (UniqueName: \"kubernetes.io/projected/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-kube-api-access-9hc8q\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qt84f\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.397936 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.907682 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f"] Jan 23 07:07:06 crc kubenswrapper[4937]: I0123 07:07:06.991337 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" event={"ID":"d79d9a1c-a285-4b25-a57e-fbb3d462d65e","Type":"ContainerStarted","Data":"8616dc978a4dae894a3f6bed11048bf78f72032d8d77bd48ad88458ced4d46a0"} Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.611278 4937 scope.go:117] "RemoveContainer" containerID="4d3a577243c736f82ded9ab9ec82fadb8e5a0ec350e16d5c35a9e585c626e74e" Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.645738 4937 scope.go:117] "RemoveContainer" containerID="ed63eebdd30ef78e36cb6e60448149282449eca0e96c6fdf74ff2fc8e73fc07a" Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.675850 4937 scope.go:117] "RemoveContainer" containerID="296b6dea964b365c3d62f1a5fe36508c25fbb696c1820acfb7b44663ed37669b" Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.718569 4937 scope.go:117] "RemoveContainer" containerID="2c132e615275b4476fc34c357601a223b7d9903f80491c44c8d2fc44737182b5" Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.764280 4937 scope.go:117] "RemoveContainer" containerID="da55b988ef3e3f777e74f987057d7267952014001749ab4eba172f45c88978e7" Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.805116 4937 scope.go:117] "RemoveContainer" containerID="8f08204ca761eb2391502cda0f85294e770dff724aeca5d61e6b6f004c590b18" Jan 23 07:07:07 crc kubenswrapper[4937]: I0123 07:07:07.824399 4937 scope.go:117] "RemoveContainer" containerID="2854dddda04b30ea0b28e1f7161d4e05cedc45b0ee2677a32377784d71dea149" Jan 23 07:07:08 crc kubenswrapper[4937]: I0123 07:07:08.003196 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" event={"ID":"d79d9a1c-a285-4b25-a57e-fbb3d462d65e","Type":"ContainerStarted","Data":"f8f7ddba67a6179034065b0938b35c03664dd45268a486a12c62539869b99f12"} Jan 23 07:07:08 crc kubenswrapper[4937]: I0123 07:07:08.027887 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" podStartSLOduration=1.5822075359999999 podStartE2EDuration="2.027867152s" podCreationTimestamp="2026-01-23 07:07:06 +0000 UTC" firstStartedPulling="2026-01-23 07:07:06.912985973 +0000 UTC m=+2026.716752626" lastFinishedPulling="2026-01-23 07:07:07.358645589 +0000 UTC m=+2027.162412242" observedRunningTime="2026-01-23 07:07:08.018124687 +0000 UTC m=+2027.821891350" watchObservedRunningTime="2026-01-23 07:07:08.027867152 +0000 UTC m=+2027.831633805" Jan 23 07:07:09 crc kubenswrapper[4937]: I0123 07:07:09.526788 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:07:10 crc kubenswrapper[4937]: I0123 07:07:10.025715 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"0522c574d47e4ac97cb11162b71392b0d5a0266310fe2f914c57e2ae4f267d43"} Jan 23 07:07:24 crc kubenswrapper[4937]: I0123 07:07:24.079627 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-z6jhd"] Jan 23 07:07:24 crc kubenswrapper[4937]: I0123 07:07:24.099436 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-z6jhd"] Jan 23 07:07:24 crc kubenswrapper[4937]: I0123 07:07:24.537876 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bf80e7-424f-41b4-8baf-393a440d8c1c" path="/var/lib/kubelet/pods/f2bf80e7-424f-41b4-8baf-393a440d8c1c/volumes" Jan 23 07:07:28 crc 
kubenswrapper[4937]: I0123 07:07:28.039552 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q4wrm"] Jan 23 07:07:28 crc kubenswrapper[4937]: I0123 07:07:28.047547 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q4wrm"] Jan 23 07:07:28 crc kubenswrapper[4937]: I0123 07:07:28.542417 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7ea5f4b-dcae-4f04-b580-cc1e378d4073" path="/var/lib/kubelet/pods/b7ea5f4b-dcae-4f04-b580-cc1e378d4073/volumes" Jan 23 07:07:49 crc kubenswrapper[4937]: I0123 07:07:49.417784 4937 generic.go:334] "Generic (PLEG): container finished" podID="d79d9a1c-a285-4b25-a57e-fbb3d462d65e" containerID="f8f7ddba67a6179034065b0938b35c03664dd45268a486a12c62539869b99f12" exitCode=0 Jan 23 07:07:49 crc kubenswrapper[4937]: I0123 07:07:49.417842 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" event={"ID":"d79d9a1c-a285-4b25-a57e-fbb3d462d65e","Type":"ContainerDied","Data":"f8f7ddba67a6179034065b0938b35c03664dd45268a486a12c62539869b99f12"} Jan 23 07:07:50 crc kubenswrapper[4937]: I0123 07:07:50.963244 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.042014 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-ssh-key-openstack-edpm-ipam\") pod \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.042076 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hc8q\" (UniqueName: \"kubernetes.io/projected/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-kube-api-access-9hc8q\") pod \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.042213 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-inventory\") pod \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\" (UID: \"d79d9a1c-a285-4b25-a57e-fbb3d462d65e\") " Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.051848 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-kube-api-access-9hc8q" (OuterVolumeSpecName: "kube-api-access-9hc8q") pod "d79d9a1c-a285-4b25-a57e-fbb3d462d65e" (UID: "d79d9a1c-a285-4b25-a57e-fbb3d462d65e"). InnerVolumeSpecName "kube-api-access-9hc8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.070047 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-inventory" (OuterVolumeSpecName: "inventory") pod "d79d9a1c-a285-4b25-a57e-fbb3d462d65e" (UID: "d79d9a1c-a285-4b25-a57e-fbb3d462d65e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.070965 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d79d9a1c-a285-4b25-a57e-fbb3d462d65e" (UID: "d79d9a1c-a285-4b25-a57e-fbb3d462d65e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.145583 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.145664 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.145685 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hc8q\" (UniqueName: \"kubernetes.io/projected/d79d9a1c-a285-4b25-a57e-fbb3d462d65e-kube-api-access-9hc8q\") on node \"crc\" DevicePath \"\"" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.443402 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" event={"ID":"d79d9a1c-a285-4b25-a57e-fbb3d462d65e","Type":"ContainerDied","Data":"8616dc978a4dae894a3f6bed11048bf78f72032d8d77bd48ad88458ced4d46a0"} Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.443446 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8616dc978a4dae894a3f6bed11048bf78f72032d8d77bd48ad88458ced4d46a0" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 
07:07:51.443459 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qt84f" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.542298 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2"] Jan 23 07:07:51 crc kubenswrapper[4937]: E0123 07:07:51.542846 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79d9a1c-a285-4b25-a57e-fbb3d462d65e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.542868 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79d9a1c-a285-4b25-a57e-fbb3d462d65e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.543100 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79d9a1c-a285-4b25-a57e-fbb3d462d65e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.543976 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.546361 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.546385 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.546642 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.547919 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.552751 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2"] Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.659057 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.659141 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.659281 
4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcl9f\" (UniqueName: \"kubernetes.io/projected/9ecb96e5-109f-44d6-8f2e-ea090aa26541-kube-api-access-jcl9f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.761314 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.761385 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.761454 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcl9f\" (UniqueName: \"kubernetes.io/projected/9ecb96e5-109f-44d6-8f2e-ea090aa26541-kube-api-access-jcl9f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.766109 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.766151 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.786373 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcl9f\" (UniqueName: \"kubernetes.io/projected/9ecb96e5-109f-44d6-8f2e-ea090aa26541-kube-api-access-jcl9f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-45kn2\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:51 crc kubenswrapper[4937]: I0123 07:07:51.865268 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" Jan 23 07:07:52 crc kubenswrapper[4937]: I0123 07:07:52.438743 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2"] Jan 23 07:07:52 crc kubenswrapper[4937]: I0123 07:07:52.456080 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" event={"ID":"9ecb96e5-109f-44d6-8f2e-ea090aa26541","Type":"ContainerStarted","Data":"af47faac3c6c603ca96558ccad25b9957637979be4be53fe8918a610288d6e38"} Jan 23 07:07:53 crc kubenswrapper[4937]: I0123 07:07:53.490256 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" event={"ID":"9ecb96e5-109f-44d6-8f2e-ea090aa26541","Type":"ContainerStarted","Data":"8583af27b893f037adc054bdcbb334877e6ccd8428f1bc72573aee3b8c201caf"} Jan 23 07:07:53 crc kubenswrapper[4937]: I0123 07:07:53.509033 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" podStartSLOduration=1.998536058 podStartE2EDuration="2.509009106s" podCreationTimestamp="2026-01-23 07:07:51 +0000 UTC" firstStartedPulling="2026-01-23 07:07:52.44685513 +0000 UTC m=+2072.250621793" lastFinishedPulling="2026-01-23 07:07:52.957328158 +0000 UTC m=+2072.761094841" observedRunningTime="2026-01-23 07:07:53.507276719 +0000 UTC m=+2073.311043382" watchObservedRunningTime="2026-01-23 07:07:53.509009106 +0000 UTC m=+2073.312775799" Jan 23 07:08:07 crc kubenswrapper[4937]: I0123 07:08:07.066107 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wp8f4"] Jan 23 07:08:07 crc kubenswrapper[4937]: I0123 07:08:07.073878 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wp8f4"] Jan 23 07:08:07 crc kubenswrapper[4937]: I0123 
07:08:07.980493 4937 scope.go:117] "RemoveContainer" containerID="a752c7dc8111ec23ecdd255f9de261340376389ed6e07fdfc7b3481f0a1f4b50" Jan 23 07:08:08 crc kubenswrapper[4937]: I0123 07:08:08.015952 4937 scope.go:117] "RemoveContainer" containerID="b07b50b15265827e42ca199fe4f384c4b67dd85bf80e0d1c609946c1d0b8d5f3" Jan 23 07:08:08 crc kubenswrapper[4937]: I0123 07:08:08.539874 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495ea696-2703-4613-ac3f-390fce312e4f" path="/var/lib/kubelet/pods/495ea696-2703-4613-ac3f-390fce312e4f/volumes" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.560383 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8v9vg"] Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.564326 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.591627 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26g6\" (UniqueName: \"kubernetes.io/projected/46445e06-1fe3-4942-b15f-f4c412aa45c1-kube-api-access-l26g6\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.592366 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-utilities\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.592499 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-catalog-content\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.591640 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8v9vg"] Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.693794 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-utilities\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.693862 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-catalog-content\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.693917 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26g6\" (UniqueName: \"kubernetes.io/projected/46445e06-1fe3-4942-b15f-f4c412aa45c1-kube-api-access-l26g6\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.694576 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-utilities\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.694839 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-catalog-content\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.718465 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26g6\" (UniqueName: \"kubernetes.io/projected/46445e06-1fe3-4942-b15f-f4c412aa45c1-kube-api-access-l26g6\") pod \"redhat-operators-8v9vg\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:11 crc kubenswrapper[4937]: I0123 07:08:11.892539 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:12 crc kubenswrapper[4937]: I0123 07:08:12.366992 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8v9vg"] Jan 23 07:08:12 crc kubenswrapper[4937]: W0123 07:08:12.369368 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46445e06_1fe3_4942_b15f_f4c412aa45c1.slice/crio-f8d6afd06edc7e7e356617ef3dee3bed0dc8a58d9affde649874327d63d3b6f9 WatchSource:0}: Error finding container f8d6afd06edc7e7e356617ef3dee3bed0dc8a58d9affde649874327d63d3b6f9: Status 404 returned error can't find the container with id f8d6afd06edc7e7e356617ef3dee3bed0dc8a58d9affde649874327d63d3b6f9 Jan 23 07:08:12 crc kubenswrapper[4937]: I0123 07:08:12.648365 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerStarted","Data":"f8d6afd06edc7e7e356617ef3dee3bed0dc8a58d9affde649874327d63d3b6f9"} Jan 23 07:08:13 crc kubenswrapper[4937]: I0123 07:08:13.660408 4937 
generic.go:334] "Generic (PLEG): container finished" podID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerID="38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90" exitCode=0 Jan 23 07:08:13 crc kubenswrapper[4937]: I0123 07:08:13.660450 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerDied","Data":"38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90"} Jan 23 07:08:15 crc kubenswrapper[4937]: I0123 07:08:15.685980 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerStarted","Data":"5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7"} Jan 23 07:08:18 crc kubenswrapper[4937]: I0123 07:08:18.715729 4937 generic.go:334] "Generic (PLEG): container finished" podID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerID="5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7" exitCode=0 Jan 23 07:08:18 crc kubenswrapper[4937]: I0123 07:08:18.715757 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerDied","Data":"5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7"} Jan 23 07:08:19 crc kubenswrapper[4937]: I0123 07:08:19.738170 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerStarted","Data":"4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f"} Jan 23 07:08:19 crc kubenswrapper[4937]: I0123 07:08:19.760713 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8v9vg" podStartSLOduration=3.193266791 podStartE2EDuration="8.760692355s" 
podCreationTimestamp="2026-01-23 07:08:11 +0000 UTC" firstStartedPulling="2026-01-23 07:08:13.663032126 +0000 UTC m=+2093.466798779" lastFinishedPulling="2026-01-23 07:08:19.23045768 +0000 UTC m=+2099.034224343" observedRunningTime="2026-01-23 07:08:19.755976507 +0000 UTC m=+2099.559743170" watchObservedRunningTime="2026-01-23 07:08:19.760692355 +0000 UTC m=+2099.564459008" Jan 23 07:08:21 crc kubenswrapper[4937]: I0123 07:08:21.894045 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:21 crc kubenswrapper[4937]: I0123 07:08:21.894675 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:22 crc kubenswrapper[4937]: I0123 07:08:22.941243 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8v9vg" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="registry-server" probeResult="failure" output=< Jan 23 07:08:22 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 07:08:22 crc kubenswrapper[4937]: > Jan 23 07:08:31 crc kubenswrapper[4937]: I0123 07:08:31.943219 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:31 crc kubenswrapper[4937]: I0123 07:08:31.994476 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:32 crc kubenswrapper[4937]: I0123 07:08:32.177983 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8v9vg"] Jan 23 07:08:33 crc kubenswrapper[4937]: I0123 07:08:33.866523 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8v9vg" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="registry-server" 
containerID="cri-o://4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f" gracePeriod=2 Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.368082 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9vg" Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.499415 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-utilities\") pod \"46445e06-1fe3-4942-b15f-f4c412aa45c1\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.499832 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-catalog-content\") pod \"46445e06-1fe3-4942-b15f-f4c412aa45c1\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.500033 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l26g6\" (UniqueName: \"kubernetes.io/projected/46445e06-1fe3-4942-b15f-f4c412aa45c1-kube-api-access-l26g6\") pod \"46445e06-1fe3-4942-b15f-f4c412aa45c1\" (UID: \"46445e06-1fe3-4942-b15f-f4c412aa45c1\") " Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.500439 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-utilities" (OuterVolumeSpecName: "utilities") pod "46445e06-1fe3-4942-b15f-f4c412aa45c1" (UID: "46445e06-1fe3-4942-b15f-f4c412aa45c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.500752 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.506780 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46445e06-1fe3-4942-b15f-f4c412aa45c1-kube-api-access-l26g6" (OuterVolumeSpecName: "kube-api-access-l26g6") pod "46445e06-1fe3-4942-b15f-f4c412aa45c1" (UID: "46445e06-1fe3-4942-b15f-f4c412aa45c1"). InnerVolumeSpecName "kube-api-access-l26g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.602543 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l26g6\" (UniqueName: \"kubernetes.io/projected/46445e06-1fe3-4942-b15f-f4c412aa45c1-kube-api-access-l26g6\") on node \"crc\" DevicePath \"\""
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.638310 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46445e06-1fe3-4942-b15f-f4c412aa45c1" (UID: "46445e06-1fe3-4942-b15f-f4c412aa45c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.704639 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46445e06-1fe3-4942-b15f-f4c412aa45c1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.878480 4937 generic.go:334] "Generic (PLEG): container finished" podID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerID="4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f" exitCode=0
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.878531 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerDied","Data":"4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f"}
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.878568 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v9vg" event={"ID":"46445e06-1fe3-4942-b15f-f4c412aa45c1","Type":"ContainerDied","Data":"f8d6afd06edc7e7e356617ef3dee3bed0dc8a58d9affde649874327d63d3b6f9"}
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.878567 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v9vg"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.878610 4937 scope.go:117] "RemoveContainer" containerID="4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.904204 4937 scope.go:117] "RemoveContainer" containerID="5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.913344 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8v9vg"]
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.921211 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8v9vg"]
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.931495 4937 scope.go:117] "RemoveContainer" containerID="38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.969641 4937 scope.go:117] "RemoveContainer" containerID="4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f"
Jan 23 07:08:34 crc kubenswrapper[4937]: E0123 07:08:34.970168 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f\": container with ID starting with 4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f not found: ID does not exist" containerID="4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.970202 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f"} err="failed to get container status \"4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f\": rpc error: code = NotFound desc = could not find container \"4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f\": container with ID starting with 4fb7c21ffdf6a878bb2cc9acadb8c51a88a7ff63cd3676d159bedeb46bb1bc6f not found: ID does not exist"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.970226 4937 scope.go:117] "RemoveContainer" containerID="5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7"
Jan 23 07:08:34 crc kubenswrapper[4937]: E0123 07:08:34.970564 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7\": container with ID starting with 5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7 not found: ID does not exist" containerID="5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.970605 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7"} err="failed to get container status \"5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7\": rpc error: code = NotFound desc = could not find container \"5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7\": container with ID starting with 5bdd82751b894ded5382428bd38b53d80704fe2e7af7b1f1d10ceb02dc61e5e7 not found: ID does not exist"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.970625 4937 scope.go:117] "RemoveContainer" containerID="38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90"
Jan 23 07:08:34 crc kubenswrapper[4937]: E0123 07:08:34.970916 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90\": container with ID starting with 38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90 not found: ID does not exist" containerID="38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90"
Jan 23 07:08:34 crc kubenswrapper[4937]: I0123 07:08:34.970944 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90"} err="failed to get container status \"38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90\": rpc error: code = NotFound desc = could not find container \"38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90\": container with ID starting with 38d9a28f77f2c9e4131919a080d2b0057f94a896a92e3c393d4eb17a3d8f5e90 not found: ID does not exist"
Jan 23 07:08:36 crc kubenswrapper[4937]: I0123 07:08:36.537174 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" path="/var/lib/kubelet/pods/46445e06-1fe3-4942-b15f-f4c412aa45c1/volumes"
Jan 23 07:08:51 crc kubenswrapper[4937]: I0123 07:08:51.052272 4937 generic.go:334] "Generic (PLEG): container finished" podID="9ecb96e5-109f-44d6-8f2e-ea090aa26541" containerID="8583af27b893f037adc054bdcbb334877e6ccd8428f1bc72573aee3b8c201caf" exitCode=0
Jan 23 07:08:51 crc kubenswrapper[4937]: I0123 07:08:51.052360 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" event={"ID":"9ecb96e5-109f-44d6-8f2e-ea090aa26541","Type":"ContainerDied","Data":"8583af27b893f037adc054bdcbb334877e6ccd8428f1bc72573aee3b8c201caf"}
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.524255 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2"
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.663997 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcl9f\" (UniqueName: \"kubernetes.io/projected/9ecb96e5-109f-44d6-8f2e-ea090aa26541-kube-api-access-jcl9f\") pod \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") "
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.664052 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-inventory\") pod \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") "
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.664110 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-ssh-key-openstack-edpm-ipam\") pod \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\" (UID: \"9ecb96e5-109f-44d6-8f2e-ea090aa26541\") "
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.670852 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecb96e5-109f-44d6-8f2e-ea090aa26541-kube-api-access-jcl9f" (OuterVolumeSpecName: "kube-api-access-jcl9f") pod "9ecb96e5-109f-44d6-8f2e-ea090aa26541" (UID: "9ecb96e5-109f-44d6-8f2e-ea090aa26541"). InnerVolumeSpecName "kube-api-access-jcl9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.694162 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-inventory" (OuterVolumeSpecName: "inventory") pod "9ecb96e5-109f-44d6-8f2e-ea090aa26541" (UID: "9ecb96e5-109f-44d6-8f2e-ea090aa26541"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.699882 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ecb96e5-109f-44d6-8f2e-ea090aa26541" (UID: "9ecb96e5-109f-44d6-8f2e-ea090aa26541"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.766554 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcl9f\" (UniqueName: \"kubernetes.io/projected/9ecb96e5-109f-44d6-8f2e-ea090aa26541-kube-api-access-jcl9f\") on node \"crc\" DevicePath \"\""
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.766601 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 07:08:52 crc kubenswrapper[4937]: I0123 07:08:52.766613 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ecb96e5-109f-44d6-8f2e-ea090aa26541-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.074391 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2" event={"ID":"9ecb96e5-109f-44d6-8f2e-ea090aa26541","Type":"ContainerDied","Data":"af47faac3c6c603ca96558ccad25b9957637979be4be53fe8918a610288d6e38"}
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.074438 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af47faac3c6c603ca96558ccad25b9957637979be4be53fe8918a610288d6e38"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.074503 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-45kn2"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159101 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pzgmd"]
Jan 23 07:08:53 crc kubenswrapper[4937]: E0123 07:08:53.159562 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="registry-server"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159583 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="registry-server"
Jan 23 07:08:53 crc kubenswrapper[4937]: E0123 07:08:53.159622 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="extract-utilities"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159631 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="extract-utilities"
Jan 23 07:08:53 crc kubenswrapper[4937]: E0123 07:08:53.159643 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecb96e5-109f-44d6-8f2e-ea090aa26541" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159652 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecb96e5-109f-44d6-8f2e-ea090aa26541" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 07:08:53 crc kubenswrapper[4937]: E0123 07:08:53.159677 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="extract-content"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159684 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="extract-content"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159908 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="46445e06-1fe3-4942-b15f-f4c412aa45c1" containerName="registry-server"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.159927 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecb96e5-109f-44d6-8f2e-ea090aa26541" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.160726 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.170822 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pzgmd"]
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.172101 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.172141 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.172949 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.173243 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.174731 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.174914 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8rb\" (UniqueName: \"kubernetes.io/projected/f028edf4-c095-4de0-9999-c7cec222593f-kube-api-access-pf8rb\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.175134 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.283564 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.283902 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8rb\" (UniqueName: \"kubernetes.io/projected/f028edf4-c095-4de0-9999-c7cec222593f-kube-api-access-pf8rb\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.284039 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.290140 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.291894 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.300800 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8rb\" (UniqueName: \"kubernetes.io/projected/f028edf4-c095-4de0-9999-c7cec222593f-kube-api-access-pf8rb\") pod \"ssh-known-hosts-edpm-deployment-pzgmd\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") " pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:53 crc kubenswrapper[4937]: I0123 07:08:53.480336 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:08:54 crc kubenswrapper[4937]: I0123 07:08:54.054538 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pzgmd"]
Jan 23 07:08:54 crc kubenswrapper[4937]: I0123 07:08:54.087028 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd" event={"ID":"f028edf4-c095-4de0-9999-c7cec222593f","Type":"ContainerStarted","Data":"5546ca92a37670d892c4b04d0f2e6d1c7e6d4253cd7b6c741342269d36b12db0"}
Jan 23 07:08:55 crc kubenswrapper[4937]: I0123 07:08:55.102278 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd" event={"ID":"f028edf4-c095-4de0-9999-c7cec222593f","Type":"ContainerStarted","Data":"bba7afcc022a8516f39e6c40d7e455af4e3f2f70f681c4a02d2c0bcfddd520fa"}
Jan 23 07:08:55 crc kubenswrapper[4937]: I0123 07:08:55.134549 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd" podStartSLOduration=1.5228201860000001 podStartE2EDuration="2.134522446s" podCreationTimestamp="2026-01-23 07:08:53 +0000 UTC" firstStartedPulling="2026-01-23 07:08:54.054459433 +0000 UTC m=+2133.858226086" lastFinishedPulling="2026-01-23 07:08:54.666161663 +0000 UTC m=+2134.469928346" observedRunningTime="2026-01-23 07:08:55.131548294 +0000 UTC m=+2134.935314977" watchObservedRunningTime="2026-01-23 07:08:55.134522446 +0000 UTC m=+2134.938289099"
Jan 23 07:09:02 crc kubenswrapper[4937]: I0123 07:09:02.187835 4937 generic.go:334] "Generic (PLEG): container finished" podID="f028edf4-c095-4de0-9999-c7cec222593f" containerID="bba7afcc022a8516f39e6c40d7e455af4e3f2f70f681c4a02d2c0bcfddd520fa" exitCode=0
Jan 23 07:09:02 crc kubenswrapper[4937]: I0123 07:09:02.188540 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd" event={"ID":"f028edf4-c095-4de0-9999-c7cec222593f","Type":"ContainerDied","Data":"bba7afcc022a8516f39e6c40d7e455af4e3f2f70f681c4a02d2c0bcfddd520fa"}
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.746241 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.792411 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf8rb\" (UniqueName: \"kubernetes.io/projected/f028edf4-c095-4de0-9999-c7cec222593f-kube-api-access-pf8rb\") pod \"f028edf4-c095-4de0-9999-c7cec222593f\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") "
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.792491 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-ssh-key-openstack-edpm-ipam\") pod \"f028edf4-c095-4de0-9999-c7cec222593f\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") "
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.792610 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-inventory-0\") pod \"f028edf4-c095-4de0-9999-c7cec222593f\" (UID: \"f028edf4-c095-4de0-9999-c7cec222593f\") "
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.801413 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f028edf4-c095-4de0-9999-c7cec222593f-kube-api-access-pf8rb" (OuterVolumeSpecName: "kube-api-access-pf8rb") pod "f028edf4-c095-4de0-9999-c7cec222593f" (UID: "f028edf4-c095-4de0-9999-c7cec222593f"). InnerVolumeSpecName "kube-api-access-pf8rb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.823085 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f028edf4-c095-4de0-9999-c7cec222593f" (UID: "f028edf4-c095-4de0-9999-c7cec222593f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.824303 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f028edf4-c095-4de0-9999-c7cec222593f" (UID: "f028edf4-c095-4de0-9999-c7cec222593f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.897061 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf8rb\" (UniqueName: \"kubernetes.io/projected/f028edf4-c095-4de0-9999-c7cec222593f-kube-api-access-pf8rb\") on node \"crc\" DevicePath \"\""
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.897098 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 07:09:03 crc kubenswrapper[4937]: I0123 07:09:03.897113 4937 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f028edf4-c095-4de0-9999-c7cec222593f-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.209569 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd" event={"ID":"f028edf4-c095-4de0-9999-c7cec222593f","Type":"ContainerDied","Data":"5546ca92a37670d892c4b04d0f2e6d1c7e6d4253cd7b6c741342269d36b12db0"}
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.209638 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5546ca92a37670d892c4b04d0f2e6d1c7e6d4253cd7b6c741342269d36b12db0"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.209705 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pzgmd"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.310977 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"]
Jan 23 07:09:04 crc kubenswrapper[4937]: E0123 07:09:04.312113 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f028edf4-c095-4de0-9999-c7cec222593f" containerName="ssh-known-hosts-edpm-deployment"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.312165 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f028edf4-c095-4de0-9999-c7cec222593f" containerName="ssh-known-hosts-edpm-deployment"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.312705 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f028edf4-c095-4de0-9999-c7cec222593f" containerName="ssh-known-hosts-edpm-deployment"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.315071 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.319232 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.319317 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.319696 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.320089 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.326793 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"]
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.406578 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j84xh\" (UniqueName: \"kubernetes.io/projected/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-kube-api-access-j84xh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.406664 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.406930 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.510088 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.510231 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.511805 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j84xh\" (UniqueName: \"kubernetes.io/projected/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-kube-api-access-j84xh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.515055 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.520221 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.527763 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j84xh\" (UniqueName: \"kubernetes.io/projected/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-kube-api-access-j84xh\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tkjjb\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:04 crc kubenswrapper[4937]: I0123 07:09:04.639248 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:05 crc kubenswrapper[4937]: I0123 07:09:05.149983 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"]
Jan 23 07:09:05 crc kubenswrapper[4937]: I0123 07:09:05.219978 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb" event={"ID":"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45","Type":"ContainerStarted","Data":"122c8a5c5a6d36a1eba4dfe248307021f6b00c7bfab4d31dd9c2f0c88aec4fe4"}
Jan 23 07:09:06 crc kubenswrapper[4937]: I0123 07:09:06.228428 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb" event={"ID":"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45","Type":"ContainerStarted","Data":"12308e530aeb7c268f0527a4d7f2826600d0b512b14332b43e626b7b5d38dc75"}
Jan 23 07:09:06 crc kubenswrapper[4937]: I0123 07:09:06.254857 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb" podStartSLOduration=1.754168637 podStartE2EDuration="2.254838168s" podCreationTimestamp="2026-01-23 07:09:04 +0000 UTC" firstStartedPulling="2026-01-23 07:09:05.167685313 +0000 UTC m=+2144.971451976" lastFinishedPulling="2026-01-23 07:09:05.668354864 +0000 UTC m=+2145.472121507" observedRunningTime="2026-01-23 07:09:06.246857441 +0000 UTC m=+2146.050624134" watchObservedRunningTime="2026-01-23 07:09:06.254838168 +0000 UTC m=+2146.058604841"
Jan 23 07:09:08 crc kubenswrapper[4937]: I0123 07:09:08.125675 4937 scope.go:117] "RemoveContainer" containerID="c28351eb66419de87a5c798d42d8b1b2d6118e2834e10355fc9062b3d5100fff"
Jan 23 07:09:14 crc kubenswrapper[4937]: I0123 07:09:14.299816 4937 generic.go:334] "Generic (PLEG): container finished" podID="1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" containerID="12308e530aeb7c268f0527a4d7f2826600d0b512b14332b43e626b7b5d38dc75" exitCode=0
Jan 23 07:09:14 crc kubenswrapper[4937]: I0123 07:09:14.300332 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb" event={"ID":"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45","Type":"ContainerDied","Data":"12308e530aeb7c268f0527a4d7f2826600d0b512b14332b43e626b7b5d38dc75"}
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.738585 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb"
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.849374 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-inventory\") pod \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") "
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.849774 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j84xh\" (UniqueName: \"kubernetes.io/projected/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-kube-api-access-j84xh\") pod \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") "
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.850035 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-ssh-key-openstack-edpm-ipam\") pod \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\" (UID: \"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45\") "
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.855437 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-kube-api-access-j84xh" (OuterVolumeSpecName: "kube-api-access-j84xh") pod "1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" (UID: "1a60b146-0a1a-4ecb-a49a-cd6af7a60a45"). InnerVolumeSpecName "kube-api-access-j84xh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.877868 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-inventory" (OuterVolumeSpecName: "inventory") pod "1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" (UID: "1a60b146-0a1a-4ecb-a49a-cd6af7a60a45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.878251 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" (UID: "1a60b146-0a1a-4ecb-a49a-cd6af7a60a45"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.953069 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.953104 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j84xh\" (UniqueName: \"kubernetes.io/projected/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-kube-api-access-j84xh\") on node \"crc\" DevicePath \"\""
Jan 23 07:09:15 crc kubenswrapper[4937]: I0123 07:09:15.953119 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a60b146-0a1a-4ecb-a49a-cd6af7a60a45-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.319336 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb" event={"ID":"1a60b146-0a1a-4ecb-a49a-cd6af7a60a45","Type":"ContainerDied","Data":"122c8a5c5a6d36a1eba4dfe248307021f6b00c7bfab4d31dd9c2f0c88aec4fe4"}
Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.319405 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122c8a5c5a6d36a1eba4dfe248307021f6b00c7bfab4d31dd9c2f0c88aec4fe4"
Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.319776 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tkjjb" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.392968 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4"] Jan 23 07:09:16 crc kubenswrapper[4937]: E0123 07:09:16.393476 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.393500 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.393951 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a60b146-0a1a-4ecb-a49a-cd6af7a60a45" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.394827 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.397746 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.397786 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.397989 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.398088 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.402237 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4"] Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.573049 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.573342 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqn2\" (UniqueName: \"kubernetes.io/projected/c096740b-e5ec-44ee-aba1-16567d10dd18-kube-api-access-7mqn2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.573446 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.675362 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.675478 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mqn2\" (UniqueName: \"kubernetes.io/projected/c096740b-e5ec-44ee-aba1-16567d10dd18-kube-api-access-7mqn2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.675654 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.680797 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.685010 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.700469 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mqn2\" (UniqueName: \"kubernetes.io/projected/c096740b-e5ec-44ee-aba1-16567d10dd18-kube-api-access-7mqn2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:16 crc kubenswrapper[4937]: I0123 07:09:16.712802 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:17 crc kubenswrapper[4937]: I0123 07:09:17.637202 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4"] Jan 23 07:09:18 crc kubenswrapper[4937]: I0123 07:09:18.549234 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" event={"ID":"c096740b-e5ec-44ee-aba1-16567d10dd18","Type":"ContainerStarted","Data":"faf9a46f58f428f58cf287cee92c4a53fe330b11ed158b6b421249f098f5715b"} Jan 23 07:09:18 crc kubenswrapper[4937]: I0123 07:09:18.549886 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" event={"ID":"c096740b-e5ec-44ee-aba1-16567d10dd18","Type":"ContainerStarted","Data":"a13b1a1eb70f2b643383d7fd40c436302f66c2cc920accadc1435a1aaa949797"} Jan 23 07:09:18 crc kubenswrapper[4937]: I0123 07:09:18.560240 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" podStartSLOduration=1.9692701019999999 podStartE2EDuration="2.560217307s" podCreationTimestamp="2026-01-23 07:09:16 +0000 UTC" firstStartedPulling="2026-01-23 07:09:17.634586303 +0000 UTC m=+2157.438352956" lastFinishedPulling="2026-01-23 07:09:18.225533508 +0000 UTC m=+2158.029300161" observedRunningTime="2026-01-23 07:09:18.548023036 +0000 UTC m=+2158.351789689" watchObservedRunningTime="2026-01-23 07:09:18.560217307 +0000 UTC m=+2158.363983970" Jan 23 07:09:28 crc kubenswrapper[4937]: I0123 07:09:28.618138 4937 generic.go:334] "Generic (PLEG): container finished" podID="c096740b-e5ec-44ee-aba1-16567d10dd18" containerID="faf9a46f58f428f58cf287cee92c4a53fe330b11ed158b6b421249f098f5715b" exitCode=0 Jan 23 07:09:28 crc kubenswrapper[4937]: I0123 07:09:28.618239 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" event={"ID":"c096740b-e5ec-44ee-aba1-16567d10dd18","Type":"ContainerDied","Data":"faf9a46f58f428f58cf287cee92c4a53fe330b11ed158b6b421249f098f5715b"} Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.082214 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.179378 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-inventory\") pod \"c096740b-e5ec-44ee-aba1-16567d10dd18\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.179808 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mqn2\" (UniqueName: \"kubernetes.io/projected/c096740b-e5ec-44ee-aba1-16567d10dd18-kube-api-access-7mqn2\") pod \"c096740b-e5ec-44ee-aba1-16567d10dd18\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.180037 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-ssh-key-openstack-edpm-ipam\") pod \"c096740b-e5ec-44ee-aba1-16567d10dd18\" (UID: \"c096740b-e5ec-44ee-aba1-16567d10dd18\") " Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.187224 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c096740b-e5ec-44ee-aba1-16567d10dd18-kube-api-access-7mqn2" (OuterVolumeSpecName: "kube-api-access-7mqn2") pod "c096740b-e5ec-44ee-aba1-16567d10dd18" (UID: "c096740b-e5ec-44ee-aba1-16567d10dd18"). InnerVolumeSpecName "kube-api-access-7mqn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.213504 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c096740b-e5ec-44ee-aba1-16567d10dd18" (UID: "c096740b-e5ec-44ee-aba1-16567d10dd18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.230460 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-inventory" (OuterVolumeSpecName: "inventory") pod "c096740b-e5ec-44ee-aba1-16567d10dd18" (UID: "c096740b-e5ec-44ee-aba1-16567d10dd18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.281766 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.281793 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c096740b-e5ec-44ee-aba1-16567d10dd18-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.281803 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mqn2\" (UniqueName: \"kubernetes.io/projected/c096740b-e5ec-44ee-aba1-16567d10dd18-kube-api-access-7mqn2\") on node \"crc\" DevicePath \"\"" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.640306 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" 
event={"ID":"c096740b-e5ec-44ee-aba1-16567d10dd18","Type":"ContainerDied","Data":"a13b1a1eb70f2b643383d7fd40c436302f66c2cc920accadc1435a1aaa949797"} Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.640357 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13b1a1eb70f2b643383d7fd40c436302f66c2cc920accadc1435a1aaa949797" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.640428 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.731011 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4"] Jan 23 07:09:30 crc kubenswrapper[4937]: E0123 07:09:30.731447 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c096740b-e5ec-44ee-aba1-16567d10dd18" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.731471 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="c096740b-e5ec-44ee-aba1-16567d10dd18" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.731740 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="c096740b-e5ec-44ee-aba1-16567d10dd18" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.732639 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.734498 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.737192 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.737697 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.737866 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.738020 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.738216 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.738408 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.746265 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.746808 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4"] Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790237 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790288 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790317 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790345 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790372 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790441 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790465 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790501 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790803 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.790888 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5vc\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-kube-api-access-gx5vc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.791033 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.791150 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.791242 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.791326 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.892919 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893223 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5vc\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-kube-api-access-gx5vc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893278 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893325 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893360 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893398 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893415 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893434 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893456 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893480 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893502 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893526 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893548 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.893570 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.898886 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.898930 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.899186 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.899977 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.900096 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.900298 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.901705 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.902538 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.902763 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.908397 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.909922 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.910526 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.911561 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:30 crc kubenswrapper[4937]: I0123 07:09:30.915438 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5vc\" (UniqueName: 
\"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-kube-api-access-gx5vc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:31 crc kubenswrapper[4937]: I0123 07:09:31.047882 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:09:31 crc kubenswrapper[4937]: I0123 07:09:31.591218 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4"] Jan 23 07:09:31 crc kubenswrapper[4937]: I0123 07:09:31.651077 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" event={"ID":"cf409357-277f-4efa-b697-1dda40e6db83","Type":"ContainerStarted","Data":"914c6fd537309da92aed535195794ee8f7786332411da306154b39a39245f461"} Jan 23 07:09:32 crc kubenswrapper[4937]: I0123 07:09:32.662431 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" event={"ID":"cf409357-277f-4efa-b697-1dda40e6db83","Type":"ContainerStarted","Data":"a6a53a371c9ee13265d66e0c89ac12a68368bd92ff01984cb174784a8a9f5751"} Jan 23 07:09:32 crc kubenswrapper[4937]: I0123 07:09:32.690221 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" podStartSLOduration=2.276890154 podStartE2EDuration="2.690200951s" podCreationTimestamp="2026-01-23 07:09:30 +0000 UTC" firstStartedPulling="2026-01-23 07:09:31.601156814 +0000 UTC m=+2171.404923477" lastFinishedPulling="2026-01-23 07:09:32.014467631 +0000 UTC m=+2171.818234274" observedRunningTime="2026-01-23 07:09:32.68244361 +0000 UTC m=+2172.486210303" watchObservedRunningTime="2026-01-23 07:09:32.690200951 +0000 UTC 
m=+2172.493967624" Jan 23 07:09:37 crc kubenswrapper[4937]: I0123 07:09:37.723541 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:09:37 crc kubenswrapper[4937]: I0123 07:09:37.724187 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:10:07 crc kubenswrapper[4937]: I0123 07:10:07.724302 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:10:07 crc kubenswrapper[4937]: I0123 07:10:07.725031 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:10:16 crc kubenswrapper[4937]: I0123 07:10:16.090045 4937 generic.go:334] "Generic (PLEG): container finished" podID="cf409357-277f-4efa-b697-1dda40e6db83" containerID="a6a53a371c9ee13265d66e0c89ac12a68368bd92ff01984cb174784a8a9f5751" exitCode=0 Jan 23 07:10:16 crc kubenswrapper[4937]: I0123 07:10:16.090666 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" 
event={"ID":"cf409357-277f-4efa-b697-1dda40e6db83","Type":"ContainerDied","Data":"a6a53a371c9ee13265d66e0c89ac12a68368bd92ff01984cb174784a8a9f5751"} Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.523963 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652115 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ssh-key-openstack-edpm-ipam\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652558 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-ovn-default-certs-0\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652583 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-repo-setup-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652696 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-telemetry-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652757 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-bootstrap-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652795 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5vc\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-kube-api-access-gx5vc\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652825 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-neutron-metadata-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652915 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.652970 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-libvirt-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.653000 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.653079 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-nova-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.653104 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.653126 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-inventory\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.653174 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ovn-combined-ca-bundle\") pod \"cf409357-277f-4efa-b697-1dda40e6db83\" (UID: \"cf409357-277f-4efa-b697-1dda40e6db83\") " Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.659884 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.661725 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-kube-api-access-gx5vc" (OuterVolumeSpecName: "kube-api-access-gx5vc") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "kube-api-access-gx5vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.661855 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.661874 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.661893 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.661939 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.663267 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.664499 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.665512 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.667211 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.667951 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.669216 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.688823 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-inventory" (OuterVolumeSpecName: "inventory") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.692734 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cf409357-277f-4efa-b697-1dda40e6db83" (UID: "cf409357-277f-4efa-b697-1dda40e6db83"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766897 4937 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766936 4937 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766950 4937 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766958 4937 reconciler_common.go:293] "Volume detached for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766967 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx5vc\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-kube-api-access-gx5vc\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766975 4937 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766984 4937 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.766993 4937 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.767002 4937 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.767012 4937 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 
07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.767024 4937 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cf409357-277f-4efa-b697-1dda40e6db83-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.767035 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.767044 4937 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:17 crc kubenswrapper[4937]: I0123 07:10:17.767054 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf409357-277f-4efa-b697-1dda40e6db83-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.109812 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" event={"ID":"cf409357-277f-4efa-b697-1dda40e6db83","Type":"ContainerDied","Data":"914c6fd537309da92aed535195794ee8f7786332411da306154b39a39245f461"} Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.109859 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="914c6fd537309da92aed535195794ee8f7786332411da306154b39a39245f461" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.109959 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.316199 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7"] Jan 23 07:10:18 crc kubenswrapper[4937]: E0123 07:10:18.317093 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf409357-277f-4efa-b697-1dda40e6db83" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.317124 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf409357-277f-4efa-b697-1dda40e6db83" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.317441 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf409357-277f-4efa-b697-1dda40e6db83" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.318440 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.321471 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.321533 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.322563 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.323013 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.324961 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.331464 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7"] Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.483140 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11934762-47de-4ed2-8554-a88cf1f34532-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.483331 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.483410 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/11934762-47de-4ed2-8554-a88cf1f34532-kube-api-access-ljrcs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.483446 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.483505 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.586395 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.586512 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/11934762-47de-4ed2-8554-a88cf1f34532-kube-api-access-ljrcs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.586556 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.586636 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.586740 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11934762-47de-4ed2-8554-a88cf1f34532-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.588379 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11934762-47de-4ed2-8554-a88cf1f34532-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.594339 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.595235 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.598156 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.607010 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/11934762-47de-4ed2-8554-a88cf1f34532-kube-api-access-ljrcs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qjfw7\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:18 crc kubenswrapper[4937]: I0123 07:10:18.647237 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:10:19 crc kubenswrapper[4937]: I0123 07:10:19.177154 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7"] Jan 23 07:10:20 crc kubenswrapper[4937]: I0123 07:10:20.154692 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" event={"ID":"11934762-47de-4ed2-8554-a88cf1f34532","Type":"ContainerStarted","Data":"e7c8138689084b9f32d2a1139da8d4e593de53d899f088c2de268554dba6af8e"} Jan 23 07:10:21 crc kubenswrapper[4937]: I0123 07:10:21.167233 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" event={"ID":"11934762-47de-4ed2-8554-a88cf1f34532","Type":"ContainerStarted","Data":"44ad8f8d4b7f86453ded43dfb89b6d2a84d359d810ea51bcaea7b3602f814b5d"} Jan 23 07:10:21 crc kubenswrapper[4937]: I0123 07:10:21.186760 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" podStartSLOduration=2.317841859 podStartE2EDuration="3.18673803s" podCreationTimestamp="2026-01-23 07:10:18 +0000 UTC" firstStartedPulling="2026-01-23 07:10:19.186843322 +0000 UTC m=+2218.990609975" lastFinishedPulling="2026-01-23 07:10:20.055739483 +0000 UTC m=+2219.859506146" observedRunningTime="2026-01-23 07:10:21.183190874 +0000 UTC m=+2220.986957537" watchObservedRunningTime="2026-01-23 07:10:21.18673803 +0000 UTC m=+2220.990504693" Jan 23 07:10:37 crc kubenswrapper[4937]: I0123 07:10:37.723748 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:10:37 crc kubenswrapper[4937]: I0123 07:10:37.724394 4937 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:10:37 crc kubenswrapper[4937]: I0123 07:10:37.724446 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:10:37 crc kubenswrapper[4937]: I0123 07:10:37.725362 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0522c574d47e4ac97cb11162b71392b0d5a0266310fe2f914c57e2ae4f267d43"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:10:37 crc kubenswrapper[4937]: I0123 07:10:37.725418 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://0522c574d47e4ac97cb11162b71392b0d5a0266310fe2f914c57e2ae4f267d43" gracePeriod=600 Jan 23 07:10:38 crc kubenswrapper[4937]: I0123 07:10:38.362345 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="0522c574d47e4ac97cb11162b71392b0d5a0266310fe2f914c57e2ae4f267d43" exitCode=0 Jan 23 07:10:38 crc kubenswrapper[4937]: I0123 07:10:38.362844 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"0522c574d47e4ac97cb11162b71392b0d5a0266310fe2f914c57e2ae4f267d43"} Jan 23 07:10:38 crc kubenswrapper[4937]: I0123 
07:10:38.362959 4937 scope.go:117] "RemoveContainer" containerID="83b78f0b3a13bfa13d33c77d8eac1b2a369506762eb75fb2217fcb94b872911a" Jan 23 07:10:39 crc kubenswrapper[4937]: I0123 07:10:39.376584 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"} Jan 23 07:11:34 crc kubenswrapper[4937]: I0123 07:11:34.371248 4937 generic.go:334] "Generic (PLEG): container finished" podID="11934762-47de-4ed2-8554-a88cf1f34532" containerID="44ad8f8d4b7f86453ded43dfb89b6d2a84d359d810ea51bcaea7b3602f814b5d" exitCode=0 Jan 23 07:11:34 crc kubenswrapper[4937]: I0123 07:11:34.371555 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" event={"ID":"11934762-47de-4ed2-8554-a88cf1f34532","Type":"ContainerDied","Data":"44ad8f8d4b7f86453ded43dfb89b6d2a84d359d810ea51bcaea7b3602f814b5d"} Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.801116 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.980775 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ssh-key-openstack-edpm-ipam\") pod \"11934762-47de-4ed2-8554-a88cf1f34532\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.981035 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11934762-47de-4ed2-8554-a88cf1f34532-ovncontroller-config-0\") pod \"11934762-47de-4ed2-8554-a88cf1f34532\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.981140 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/11934762-47de-4ed2-8554-a88cf1f34532-kube-api-access-ljrcs\") pod \"11934762-47de-4ed2-8554-a88cf1f34532\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.981296 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-inventory\") pod \"11934762-47de-4ed2-8554-a88cf1f34532\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.981319 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ovn-combined-ca-bundle\") pod \"11934762-47de-4ed2-8554-a88cf1f34532\" (UID: \"11934762-47de-4ed2-8554-a88cf1f34532\") " Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.986693 4937 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "11934762-47de-4ed2-8554-a88cf1f34532" (UID: "11934762-47de-4ed2-8554-a88cf1f34532"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:35 crc kubenswrapper[4937]: I0123 07:11:35.987411 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11934762-47de-4ed2-8554-a88cf1f34532-kube-api-access-ljrcs" (OuterVolumeSpecName: "kube-api-access-ljrcs") pod "11934762-47de-4ed2-8554-a88cf1f34532" (UID: "11934762-47de-4ed2-8554-a88cf1f34532"). InnerVolumeSpecName "kube-api-access-ljrcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.014868 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11934762-47de-4ed2-8554-a88cf1f34532-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "11934762-47de-4ed2-8554-a88cf1f34532" (UID: "11934762-47de-4ed2-8554-a88cf1f34532"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.017040 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11934762-47de-4ed2-8554-a88cf1f34532" (UID: "11934762-47de-4ed2-8554-a88cf1f34532"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.017140 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-inventory" (OuterVolumeSpecName: "inventory") pod "11934762-47de-4ed2-8554-a88cf1f34532" (UID: "11934762-47de-4ed2-8554-a88cf1f34532"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.083781 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.083817 4937 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11934762-47de-4ed2-8554-a88cf1f34532-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.083831 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljrcs\" (UniqueName: \"kubernetes.io/projected/11934762-47de-4ed2-8554-a88cf1f34532-kube-api-access-ljrcs\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.083845 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.083857 4937 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11934762-47de-4ed2-8554-a88cf1f34532-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.388908 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" event={"ID":"11934762-47de-4ed2-8554-a88cf1f34532","Type":"ContainerDied","Data":"e7c8138689084b9f32d2a1139da8d4e593de53d899f088c2de268554dba6af8e"} Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.388950 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7c8138689084b9f32d2a1139da8d4e593de53d899f088c2de268554dba6af8e" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.389002 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qjfw7" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.491296 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn"] Jan 23 07:11:36 crc kubenswrapper[4937]: E0123 07:11:36.491818 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11934762-47de-4ed2-8554-a88cf1f34532" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.491839 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="11934762-47de-4ed2-8554-a88cf1f34532" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.492392 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="11934762-47de-4ed2-8554-a88cf1f34532" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.493494 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.496393 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.496619 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.496638 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.496690 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.496760 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.496837 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.518804 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn"] Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.592955 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.593377 4937 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.593536 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.593606 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhfh\" (UniqueName: \"kubernetes.io/projected/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-kube-api-access-qrhfh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.593680 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.593820 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.695280 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.695352 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.695872 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.696487 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.696692 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhfh\" (UniqueName: \"kubernetes.io/projected/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-kube-api-access-qrhfh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.696782 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.698928 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.702186 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.702215 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.702765 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.704748 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.719635 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhfh\" (UniqueName: \"kubernetes.io/projected/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-kube-api-access-qrhfh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn\" (UID: 
\"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:36 crc kubenswrapper[4937]: I0123 07:11:36.825447 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:11:37 crc kubenswrapper[4937]: I0123 07:11:37.339882 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:11:37 crc kubenswrapper[4937]: I0123 07:11:37.345035 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn"] Jan 23 07:11:37 crc kubenswrapper[4937]: I0123 07:11:37.400072 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" event={"ID":"7a8db5a7-cbaf-4d33-84d8-de028d12baf7","Type":"ContainerStarted","Data":"ad7603061fafef9566cb8e6ef068308fe492e0d143b7af299d508c378d67be24"} Jan 23 07:11:38 crc kubenswrapper[4937]: I0123 07:11:38.412017 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" event={"ID":"7a8db5a7-cbaf-4d33-84d8-de028d12baf7","Type":"ContainerStarted","Data":"cd7b01db1421f8a60d54969b1bda26a4eaa2888ab45dc8acc65c30338e6c8f1d"} Jan 23 07:11:38 crc kubenswrapper[4937]: I0123 07:11:38.432816 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" podStartSLOduration=1.830285605 podStartE2EDuration="2.432794816s" podCreationTimestamp="2026-01-23 07:11:36 +0000 UTC" firstStartedPulling="2026-01-23 07:11:37.339642509 +0000 UTC m=+2297.143409162" lastFinishedPulling="2026-01-23 07:11:37.94215172 +0000 UTC m=+2297.745918373" observedRunningTime="2026-01-23 07:11:38.428647513 +0000 UTC m=+2298.232414166" watchObservedRunningTime="2026-01-23 
07:11:38.432794816 +0000 UTC m=+2298.236561479" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.016802 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wmr2d"] Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.020797 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.030839 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmr2d"] Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.189815 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-catalog-content\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.189979 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7wr\" (UniqueName: \"kubernetes.io/projected/af913555-ad33-4780-a95e-2b6a71d67a10-kube-api-access-xf7wr\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.190073 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-utilities\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.292112 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-utilities\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.292242 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-catalog-content\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.292338 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7wr\" (UniqueName: \"kubernetes.io/projected/af913555-ad33-4780-a95e-2b6a71d67a10-kube-api-access-xf7wr\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.293177 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-utilities\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.293226 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-catalog-content\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.314343 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7wr\" (UniqueName: 
\"kubernetes.io/projected/af913555-ad33-4780-a95e-2b6a71d67a10-kube-api-access-xf7wr\") pod \"community-operators-wmr2d\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.354534 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:10 crc kubenswrapper[4937]: I0123 07:12:10.953619 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmr2d"] Jan 23 07:12:11 crc kubenswrapper[4937]: I0123 07:12:11.698868 4937 generic.go:334] "Generic (PLEG): container finished" podID="af913555-ad33-4780-a95e-2b6a71d67a10" containerID="5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43" exitCode=0 Jan 23 07:12:11 crc kubenswrapper[4937]: I0123 07:12:11.698926 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmr2d" event={"ID":"af913555-ad33-4780-a95e-2b6a71d67a10","Type":"ContainerDied","Data":"5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43"} Jan 23 07:12:11 crc kubenswrapper[4937]: I0123 07:12:11.698979 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmr2d" event={"ID":"af913555-ad33-4780-a95e-2b6a71d67a10","Type":"ContainerStarted","Data":"90a06235b2da626f985ca8a7d2f82f388f78a154f2c33b374b2e0fac1a4d46b3"} Jan 23 07:12:13 crc kubenswrapper[4937]: I0123 07:12:13.719876 4937 generic.go:334] "Generic (PLEG): container finished" podID="af913555-ad33-4780-a95e-2b6a71d67a10" containerID="e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e" exitCode=0 Jan 23 07:12:13 crc kubenswrapper[4937]: I0123 07:12:13.720412 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmr2d" 
event={"ID":"af913555-ad33-4780-a95e-2b6a71d67a10","Type":"ContainerDied","Data":"e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e"} Jan 23 07:12:14 crc kubenswrapper[4937]: I0123 07:12:14.731348 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmr2d" event={"ID":"af913555-ad33-4780-a95e-2b6a71d67a10","Type":"ContainerStarted","Data":"f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf"} Jan 23 07:12:14 crc kubenswrapper[4937]: I0123 07:12:14.758334 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wmr2d" podStartSLOduration=3.3287525909999998 podStartE2EDuration="5.758307831s" podCreationTimestamp="2026-01-23 07:12:09 +0000 UTC" firstStartedPulling="2026-01-23 07:12:11.703698878 +0000 UTC m=+2331.507465541" lastFinishedPulling="2026-01-23 07:12:14.133254128 +0000 UTC m=+2333.937020781" observedRunningTime="2026-01-23 07:12:14.752550915 +0000 UTC m=+2334.556317588" watchObservedRunningTime="2026-01-23 07:12:14.758307831 +0000 UTC m=+2334.562074504" Jan 23 07:12:20 crc kubenswrapper[4937]: I0123 07:12:20.355620 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:20 crc kubenswrapper[4937]: I0123 07:12:20.356213 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:20 crc kubenswrapper[4937]: I0123 07:12:20.409771 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:20 crc kubenswrapper[4937]: I0123 07:12:20.828320 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:20 crc kubenswrapper[4937]: I0123 07:12:20.878321 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-wmr2d"] Jan 23 07:12:22 crc kubenswrapper[4937]: I0123 07:12:22.802962 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wmr2d" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="registry-server" containerID="cri-o://f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf" gracePeriod=2 Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.284021 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.474061 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-catalog-content\") pod \"af913555-ad33-4780-a95e-2b6a71d67a10\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.474149 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-utilities\") pod \"af913555-ad33-4780-a95e-2b6a71d67a10\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.474318 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7wr\" (UniqueName: \"kubernetes.io/projected/af913555-ad33-4780-a95e-2b6a71d67a10-kube-api-access-xf7wr\") pod \"af913555-ad33-4780-a95e-2b6a71d67a10\" (UID: \"af913555-ad33-4780-a95e-2b6a71d67a10\") " Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.475177 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-utilities" (OuterVolumeSpecName: "utilities") pod "af913555-ad33-4780-a95e-2b6a71d67a10" (UID: 
"af913555-ad33-4780-a95e-2b6a71d67a10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.479724 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af913555-ad33-4780-a95e-2b6a71d67a10-kube-api-access-xf7wr" (OuterVolumeSpecName: "kube-api-access-xf7wr") pod "af913555-ad33-4780-a95e-2b6a71d67a10" (UID: "af913555-ad33-4780-a95e-2b6a71d67a10"). InnerVolumeSpecName "kube-api-access-xf7wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.529097 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af913555-ad33-4780-a95e-2b6a71d67a10" (UID: "af913555-ad33-4780-a95e-2b6a71d67a10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.576373 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7wr\" (UniqueName: \"kubernetes.io/projected/af913555-ad33-4780-a95e-2b6a71d67a10-kube-api-access-xf7wr\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.576432 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.576443 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af913555-ad33-4780-a95e-2b6a71d67a10-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.814695 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="af913555-ad33-4780-a95e-2b6a71d67a10" containerID="f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf" exitCode=0 Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.814750 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmr2d" event={"ID":"af913555-ad33-4780-a95e-2b6a71d67a10","Type":"ContainerDied","Data":"f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf"} Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.814771 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmr2d" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.814792 4937 scope.go:117] "RemoveContainer" containerID="f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.814779 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmr2d" event={"ID":"af913555-ad33-4780-a95e-2b6a71d67a10","Type":"ContainerDied","Data":"90a06235b2da626f985ca8a7d2f82f388f78a154f2c33b374b2e0fac1a4d46b3"} Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.835519 4937 scope.go:117] "RemoveContainer" containerID="e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.850127 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmr2d"] Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.860780 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wmr2d"] Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.886465 4937 scope.go:117] "RemoveContainer" containerID="5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.911646 4937 scope.go:117] "RemoveContainer" 
containerID="f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf" Jan 23 07:12:23 crc kubenswrapper[4937]: E0123 07:12:23.912098 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf\": container with ID starting with f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf not found: ID does not exist" containerID="f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.912147 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf"} err="failed to get container status \"f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf\": rpc error: code = NotFound desc = could not find container \"f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf\": container with ID starting with f0a80e9cc9caadd14529b9f6b45cfd12f6ef5da204d94ceaf007b76b02f9c2cf not found: ID does not exist" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.912177 4937 scope.go:117] "RemoveContainer" containerID="e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e" Jan 23 07:12:23 crc kubenswrapper[4937]: E0123 07:12:23.912641 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e\": container with ID starting with e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e not found: ID does not exist" containerID="e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.912684 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e"} err="failed to get container status \"e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e\": rpc error: code = NotFound desc = could not find container \"e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e\": container with ID starting with e532d1c4015f2d7c56a2389740a6546d690d3a13cb30c9706719b9e58bb7b42e not found: ID does not exist" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.912712 4937 scope.go:117] "RemoveContainer" containerID="5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43" Jan 23 07:12:23 crc kubenswrapper[4937]: E0123 07:12:23.913100 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43\": container with ID starting with 5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43 not found: ID does not exist" containerID="5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43" Jan 23 07:12:23 crc kubenswrapper[4937]: I0123 07:12:23.913123 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43"} err="failed to get container status \"5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43\": rpc error: code = NotFound desc = could not find container \"5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43\": container with ID starting with 5fdf5535a0af50d5aa00098ebd928ac690b59f7a0f39e65cdab7624e45696f43 not found: ID does not exist" Jan 23 07:12:24 crc kubenswrapper[4937]: I0123 07:12:24.537497 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" path="/var/lib/kubelet/pods/af913555-ad33-4780-a95e-2b6a71d67a10/volumes" Jan 23 07:12:37 crc kubenswrapper[4937]: I0123 
07:12:37.958292 4937 generic.go:334] "Generic (PLEG): container finished" podID="7a8db5a7-cbaf-4d33-84d8-de028d12baf7" containerID="cd7b01db1421f8a60d54969b1bda26a4eaa2888ab45dc8acc65c30338e6c8f1d" exitCode=0 Jan 23 07:12:37 crc kubenswrapper[4937]: I0123 07:12:37.958374 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" event={"ID":"7a8db5a7-cbaf-4d33-84d8-de028d12baf7","Type":"ContainerDied","Data":"cd7b01db1421f8a60d54969b1bda26a4eaa2888ab45dc8acc65c30338e6c8f1d"} Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.435544 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.543058 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-metadata-combined-ca-bundle\") pod \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.543120 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhfh\" (UniqueName: \"kubernetes.io/projected/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-kube-api-access-qrhfh\") pod \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.543306 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-nova-metadata-neutron-config-0\") pod \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.543358 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.543414 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-ssh-key-openstack-edpm-ipam\") pod \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.543464 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-inventory\") pod \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\" (UID: \"7a8db5a7-cbaf-4d33-84d8-de028d12baf7\") " Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.549971 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7a8db5a7-cbaf-4d33-84d8-de028d12baf7" (UID: "7a8db5a7-cbaf-4d33-84d8-de028d12baf7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.550950 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-kube-api-access-qrhfh" (OuterVolumeSpecName: "kube-api-access-qrhfh") pod "7a8db5a7-cbaf-4d33-84d8-de028d12baf7" (UID: "7a8db5a7-cbaf-4d33-84d8-de028d12baf7"). InnerVolumeSpecName "kube-api-access-qrhfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.579450 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a8db5a7-cbaf-4d33-84d8-de028d12baf7" (UID: "7a8db5a7-cbaf-4d33-84d8-de028d12baf7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.580558 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-inventory" (OuterVolumeSpecName: "inventory") pod "7a8db5a7-cbaf-4d33-84d8-de028d12baf7" (UID: "7a8db5a7-cbaf-4d33-84d8-de028d12baf7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.581960 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7a8db5a7-cbaf-4d33-84d8-de028d12baf7" (UID: "7a8db5a7-cbaf-4d33-84d8-de028d12baf7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.582504 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7a8db5a7-cbaf-4d33-84d8-de028d12baf7" (UID: "7a8db5a7-cbaf-4d33-84d8-de028d12baf7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.645808 4937 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.645858 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhfh\" (UniqueName: \"kubernetes.io/projected/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-kube-api-access-qrhfh\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.645869 4937 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.645881 4937 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.645891 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.645904 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a8db5a7-cbaf-4d33-84d8-de028d12baf7-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.986298 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" event={"ID":"7a8db5a7-cbaf-4d33-84d8-de028d12baf7","Type":"ContainerDied","Data":"ad7603061fafef9566cb8e6ef068308fe492e0d143b7af299d508c378d67be24"} Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.986365 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7603061fafef9566cb8e6ef068308fe492e0d143b7af299d508c378d67be24" Jan 23 07:12:39 crc kubenswrapper[4937]: I0123 07:12:39.986496 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.230065 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"] Jan 23 07:12:40 crc kubenswrapper[4937]: E0123 07:12:40.254052 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="extract-content" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.254092 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="extract-content" Jan 23 07:12:40 crc kubenswrapper[4937]: E0123 07:12:40.254117 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="extract-utilities" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.254126 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="extract-utilities" Jan 23 07:12:40 crc kubenswrapper[4937]: E0123 07:12:40.254149 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8db5a7-cbaf-4d33-84d8-de028d12baf7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.254157 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a8db5a7-cbaf-4d33-84d8-de028d12baf7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 07:12:40 crc kubenswrapper[4937]: E0123 07:12:40.254174 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="registry-server" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.254179 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="registry-server" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.254359 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8db5a7-cbaf-4d33-84d8-de028d12baf7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.254383 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="af913555-ad33-4780-a95e-2b6a71d67a10" containerName="registry-server" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.255050 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"] Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.255137 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.257554 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.258200 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.258476 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.258655 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.260026 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.369236 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.369281 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.369319 4937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.369684 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.370041 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktx5r\" (UniqueName: \"kubernetes.io/projected/070d09b2-6b7b-4d86-976f-aafd5c706f42-kube-api-access-ktx5r\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.471551 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktx5r\" (UniqueName: \"kubernetes.io/projected/070d09b2-6b7b-4d86-976f-aafd5c706f42-kube-api-access-ktx5r\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.472161 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.472189 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.472223 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.472294 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.475694 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.475817 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.476202 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.476805 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.490225 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktx5r\" (UniqueName: \"kubernetes.io/projected/070d09b2-6b7b-4d86-976f-aafd5c706f42-kube-api-access-ktx5r\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-86xtr\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:40 crc kubenswrapper[4937]: I0123 07:12:40.593849 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"
Jan 23 07:12:41 crc kubenswrapper[4937]: I0123 07:12:41.139374 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr"]
Jan 23 07:12:42 crc kubenswrapper[4937]: I0123 07:12:42.006155 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" event={"ID":"070d09b2-6b7b-4d86-976f-aafd5c706f42","Type":"ContainerStarted","Data":"40191a075962ac799e90328ee243eaad0472ad6228fe9ae2af2dd8649ba3a48e"}
Jan 23 07:12:42 crc kubenswrapper[4937]: I0123 07:12:42.006811 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" event={"ID":"070d09b2-6b7b-4d86-976f-aafd5c706f42","Type":"ContainerStarted","Data":"c9eddb24cd4420778d13f0d22eb67bb5e943bb8e7a43f2782789192605346c82"}
Jan 23 07:12:42 crc kubenswrapper[4937]: I0123 07:12:42.032172 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" podStartSLOduration=1.51655568 podStartE2EDuration="2.032144536s" podCreationTimestamp="2026-01-23 07:12:40 +0000 UTC" firstStartedPulling="2026-01-23 07:12:41.154610524 +0000 UTC m=+2360.958377177" lastFinishedPulling="2026-01-23 07:12:41.67019937 +0000 UTC m=+2361.473966033" observedRunningTime="2026-01-23 07:12:42.023643814 +0000 UTC m=+2361.827410497" watchObservedRunningTime="2026-01-23 07:12:42.032144536 +0000 UTC m=+2361.835911219"
Jan 23 07:13:07 crc kubenswrapper[4937]: I0123 07:13:07.723575 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:13:07 crc kubenswrapper[4937]: I0123 07:13:07.724095 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:13:37 crc kubenswrapper[4937]: I0123 07:13:37.724700 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:13:37 crc kubenswrapper[4937]: I0123 07:13:37.725310 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:13:59 crc kubenswrapper[4937]: I0123 07:13:59.957658 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkj5q"]
Jan 23 07:13:59 crc kubenswrapper[4937]: I0123 07:13:59.963082 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:13:59 crc kubenswrapper[4937]: I0123 07:13:59.981250 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkj5q"]
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.022895 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9q6\" (UniqueName: \"kubernetes.io/projected/514f124d-3165-43f1-9f8b-00bc8d620cbb-kube-api-access-gf9q6\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.023004 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-utilities\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.023031 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-catalog-content\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.124350 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-utilities\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.124692 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-catalog-content\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.124900 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf9q6\" (UniqueName: \"kubernetes.io/projected/514f124d-3165-43f1-9f8b-00bc8d620cbb-kube-api-access-gf9q6\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.125490 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-utilities\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.125839 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-catalog-content\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.146745 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf9q6\" (UniqueName: \"kubernetes.io/projected/514f124d-3165-43f1-9f8b-00bc8d620cbb-kube-api-access-gf9q6\") pod \"redhat-marketplace-mkj5q\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") " pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.289804 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.770310 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkj5q"]
Jan 23 07:14:00 crc kubenswrapper[4937]: I0123 07:14:00.782227 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkj5q" event={"ID":"514f124d-3165-43f1-9f8b-00bc8d620cbb","Type":"ContainerStarted","Data":"c375a9cf75b94984e7594d0e0837c7473e499bbe11e45f05fa8307eb15903011"}
Jan 23 07:14:01 crc kubenswrapper[4937]: I0123 07:14:01.794992 4937 generic.go:334] "Generic (PLEG): container finished" podID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerID="15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611" exitCode=0
Jan 23 07:14:01 crc kubenswrapper[4937]: I0123 07:14:01.795120 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkj5q" event={"ID":"514f124d-3165-43f1-9f8b-00bc8d620cbb","Type":"ContainerDied","Data":"15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611"}
Jan 23 07:14:02 crc kubenswrapper[4937]: I0123 07:14:02.807561 4937 generic.go:334] "Generic (PLEG): container finished" podID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerID="41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724" exitCode=0
Jan 23 07:14:02 crc kubenswrapper[4937]: I0123 07:14:02.807652 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkj5q" event={"ID":"514f124d-3165-43f1-9f8b-00bc8d620cbb","Type":"ContainerDied","Data":"41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724"}
Jan 23 07:14:03 crc kubenswrapper[4937]: I0123 07:14:03.819109 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkj5q" event={"ID":"514f124d-3165-43f1-9f8b-00bc8d620cbb","Type":"ContainerStarted","Data":"17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa"}
Jan 23 07:14:03 crc kubenswrapper[4937]: I0123 07:14:03.844924 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkj5q" podStartSLOduration=3.401414115 podStartE2EDuration="4.844904852s" podCreationTimestamp="2026-01-23 07:13:59 +0000 UTC" firstStartedPulling="2026-01-23 07:14:01.797135298 +0000 UTC m=+2441.600901951" lastFinishedPulling="2026-01-23 07:14:03.240626035 +0000 UTC m=+2443.044392688" observedRunningTime="2026-01-23 07:14:03.833508242 +0000 UTC m=+2443.637274905" watchObservedRunningTime="2026-01-23 07:14:03.844904852 +0000 UTC m=+2443.648671515"
Jan 23 07:14:07 crc kubenswrapper[4937]: I0123 07:14:07.724445 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:14:07 crc kubenswrapper[4937]: I0123 07:14:07.725423 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:14:07 crc kubenswrapper[4937]: I0123 07:14:07.725524 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 07:14:07 crc kubenswrapper[4937]: I0123 07:14:07.726937 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 07:14:07 crc kubenswrapper[4937]: I0123 07:14:07.727092 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" gracePeriod=600
Jan 23 07:14:08 crc kubenswrapper[4937]: E0123 07:14:08.359569 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:14:08 crc kubenswrapper[4937]: I0123 07:14:08.870717 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" exitCode=0
Jan 23 07:14:08 crc kubenswrapper[4937]: I0123 07:14:08.870768 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"}
Jan 23 07:14:08 crc kubenswrapper[4937]: I0123 07:14:08.871928 4937 scope.go:117] "RemoveContainer" containerID="0522c574d47e4ac97cb11162b71392b0d5a0266310fe2f914c57e2ae4f267d43"
Jan 23 07:14:08 crc kubenswrapper[4937]: I0123 07:14:08.873235 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:14:08 crc kubenswrapper[4937]: E0123 07:14:08.873805 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:14:10 crc kubenswrapper[4937]: I0123 07:14:10.290746 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:10 crc kubenswrapper[4937]: I0123 07:14:10.291075 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:10 crc kubenswrapper[4937]: I0123 07:14:10.337426 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:10 crc kubenswrapper[4937]: I0123 07:14:10.989072 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:11 crc kubenswrapper[4937]: I0123 07:14:11.037514 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkj5q"]
Jan 23 07:14:12 crc kubenswrapper[4937]: I0123 07:14:12.918508 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkj5q" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="registry-server" containerID="cri-o://17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa" gracePeriod=2
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.868237 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.924420 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-utilities\") pod \"514f124d-3165-43f1-9f8b-00bc8d620cbb\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") "
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.924506 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf9q6\" (UniqueName: \"kubernetes.io/projected/514f124d-3165-43f1-9f8b-00bc8d620cbb-kube-api-access-gf9q6\") pod \"514f124d-3165-43f1-9f8b-00bc8d620cbb\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") "
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.927085 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-catalog-content\") pod \"514f124d-3165-43f1-9f8b-00bc8d620cbb\" (UID: \"514f124d-3165-43f1-9f8b-00bc8d620cbb\") "
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.927463 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-utilities" (OuterVolumeSpecName: "utilities") pod "514f124d-3165-43f1-9f8b-00bc8d620cbb" (UID: "514f124d-3165-43f1-9f8b-00bc8d620cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.928006 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.936072 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514f124d-3165-43f1-9f8b-00bc8d620cbb-kube-api-access-gf9q6" (OuterVolumeSpecName: "kube-api-access-gf9q6") pod "514f124d-3165-43f1-9f8b-00bc8d620cbb" (UID: "514f124d-3165-43f1-9f8b-00bc8d620cbb"). InnerVolumeSpecName "kube-api-access-gf9q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.936836 4937 generic.go:334] "Generic (PLEG): container finished" podID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerID="17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa" exitCode=0
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.936886 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkj5q" event={"ID":"514f124d-3165-43f1-9f8b-00bc8d620cbb","Type":"ContainerDied","Data":"17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa"}
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.936921 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkj5q" event={"ID":"514f124d-3165-43f1-9f8b-00bc8d620cbb","Type":"ContainerDied","Data":"c375a9cf75b94984e7594d0e0837c7473e499bbe11e45f05fa8307eb15903011"}
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.936943 4937 scope.go:117] "RemoveContainer" containerID="17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa"
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.937135 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkj5q"
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.953426 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "514f124d-3165-43f1-9f8b-00bc8d620cbb" (UID: "514f124d-3165-43f1-9f8b-00bc8d620cbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:14:13 crc kubenswrapper[4937]: I0123 07:14:13.998003 4937 scope.go:117] "RemoveContainer" containerID="41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.018735 4937 scope.go:117] "RemoveContainer" containerID="15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.030104 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf9q6\" (UniqueName: \"kubernetes.io/projected/514f124d-3165-43f1-9f8b-00bc8d620cbb-kube-api-access-gf9q6\") on node \"crc\" DevicePath \"\""
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.030137 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514f124d-3165-43f1-9f8b-00bc8d620cbb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.062967 4937 scope.go:117] "RemoveContainer" containerID="17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa"
Jan 23 07:14:14 crc kubenswrapper[4937]: E0123 07:14:14.064299 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa\": container with ID starting with 17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa not found: ID does not exist" containerID="17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.064345 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa"} err="failed to get container status \"17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa\": rpc error: code = NotFound desc = could not find container \"17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa\": container with ID starting with 17398a50c59abfcf49fad12c03208ad62366dbf5e794c0b6367c94141a81cbaa not found: ID does not exist"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.064374 4937 scope.go:117] "RemoveContainer" containerID="41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724"
Jan 23 07:14:14 crc kubenswrapper[4937]: E0123 07:14:14.064744 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724\": container with ID starting with 41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724 not found: ID does not exist" containerID="41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.064768 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724"} err="failed to get container status \"41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724\": rpc error: code = NotFound desc = could not find container \"41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724\": container with ID starting with 41c699be2d462d74958d1e00609c3d5c9d8186d8e3594bd92f3c01ae9e2b7724 not found: ID does not exist"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.064781 4937 scope.go:117] "RemoveContainer" containerID="15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611"
Jan 23 07:14:14 crc kubenswrapper[4937]: E0123 07:14:14.065146 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611\": container with ID starting with 15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611 not found: ID does not exist" containerID="15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.065205 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611"} err="failed to get container status \"15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611\": rpc error: code = NotFound desc = could not find container \"15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611\": container with ID starting with 15097a1c435ec5fca65f4c9bfec8fba2be8fc1ce21677243810fa70d2e7e0611 not found: ID does not exist"
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.293059 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkj5q"]
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.301789 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkj5q"]
Jan 23 07:14:14 crc kubenswrapper[4937]: I0123 07:14:14.538260 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" path="/var/lib/kubelet/pods/514f124d-3165-43f1-9f8b-00bc8d620cbb/volumes"
Jan 23 07:14:23 crc kubenswrapper[4937]: I0123 07:14:23.526453 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:14:23 crc kubenswrapper[4937]: E0123 07:14:23.527468 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:14:38 crc kubenswrapper[4937]: I0123 07:14:38.526611 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:14:38 crc kubenswrapper[4937]: E0123 07:14:38.527527 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:14:52 crc kubenswrapper[4937]: I0123 07:14:52.530078 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:14:52 crc kubenswrapper[4937]: E0123 07:14:52.531099 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.198662 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"]
Jan 23 07:15:00 crc kubenswrapper[4937]: E0123 07:15:00.200469 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="extract-content"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.200498 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="extract-content"
Jan 23 07:15:00 crc kubenswrapper[4937]: E0123 07:15:00.200531 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="registry-server"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.200540 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="registry-server"
Jan 23 07:15:00 crc kubenswrapper[4937]: E0123 07:15:00.200627 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="extract-utilities"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.200671 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="extract-utilities"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.201205 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="514f124d-3165-43f1-9f8b-00bc8d620cbb" containerName="registry-server"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.203022 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.209909 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.216117 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.236781 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"]
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.305523 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92g4\" (UniqueName: \"kubernetes.io/projected/48b870fb-925f-4021-9ada-8977fd5b9d9c-kube-api-access-m92g4\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.305662 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48b870fb-925f-4021-9ada-8977fd5b9d9c-config-volume\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.305702 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48b870fb-925f-4021-9ada-8977fd5b9d9c-secret-volume\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.407913 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92g4\" (UniqueName: \"kubernetes.io/projected/48b870fb-925f-4021-9ada-8977fd5b9d9c-kube-api-access-m92g4\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.408068 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48b870fb-925f-4021-9ada-8977fd5b9d9c-config-volume\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.408123 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48b870fb-925f-4021-9ada-8977fd5b9d9c-secret-volume\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.408986 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48b870fb-925f-4021-9ada-8977fd5b9d9c-config-volume\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.415369 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48b870fb-925f-4021-9ada-8977fd5b9d9c-secret-volume\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.424554 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92g4\" (UniqueName: \"kubernetes.io/projected/48b870fb-925f-4021-9ada-8977fd5b9d9c-kube-api-access-m92g4\") pod \"collect-profiles-29485875-r8zfx\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.546762 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:00 crc kubenswrapper[4937]: I0123 07:15:00.993283 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"]
Jan 23 07:15:01 crc kubenswrapper[4937]: I0123 07:15:01.392422 4937 generic.go:334] "Generic (PLEG): container finished" podID="48b870fb-925f-4021-9ada-8977fd5b9d9c" containerID="450b7facb7cb8dd4e313fce3c2e0777efe4148078eb5b7c8d40bd30e12085368" exitCode=0
Jan 23 07:15:01 crc kubenswrapper[4937]: I0123 07:15:01.392539 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx" event={"ID":"48b870fb-925f-4021-9ada-8977fd5b9d9c","Type":"ContainerDied","Data":"450b7facb7cb8dd4e313fce3c2e0777efe4148078eb5b7c8d40bd30e12085368"}
Jan 23 07:15:01 crc kubenswrapper[4937]: I0123 07:15:01.392675 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx" event={"ID":"48b870fb-925f-4021-9ada-8977fd5b9d9c","Type":"ContainerStarted","Data":"6ea4cf97849249ae0c69be4f96702f50fe57f98858513969caf15302988ccdb4"}
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.726361 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.862157 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m92g4\" (UniqueName: \"kubernetes.io/projected/48b870fb-925f-4021-9ada-8977fd5b9d9c-kube-api-access-m92g4\") pod \"48b870fb-925f-4021-9ada-8977fd5b9d9c\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") "
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.862208 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48b870fb-925f-4021-9ada-8977fd5b9d9c-config-volume\") pod \"48b870fb-925f-4021-9ada-8977fd5b9d9c\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") "
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.862273 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48b870fb-925f-4021-9ada-8977fd5b9d9c-secret-volume\") pod \"48b870fb-925f-4021-9ada-8977fd5b9d9c\" (UID: \"48b870fb-925f-4021-9ada-8977fd5b9d9c\") "
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.862788 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b870fb-925f-4021-9ada-8977fd5b9d9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "48b870fb-925f-4021-9ada-8977fd5b9d9c" (UID: "48b870fb-925f-4021-9ada-8977fd5b9d9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.867657 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b870fb-925f-4021-9ada-8977fd5b9d9c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "48b870fb-925f-4021-9ada-8977fd5b9d9c" (UID: "48b870fb-925f-4021-9ada-8977fd5b9d9c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.868896 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b870fb-925f-4021-9ada-8977fd5b9d9c-kube-api-access-m92g4" (OuterVolumeSpecName: "kube-api-access-m92g4") pod "48b870fb-925f-4021-9ada-8977fd5b9d9c" (UID: "48b870fb-925f-4021-9ada-8977fd5b9d9c"). InnerVolumeSpecName "kube-api-access-m92g4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.965074 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48b870fb-925f-4021-9ada-8977fd5b9d9c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.965112 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m92g4\" (UniqueName: \"kubernetes.io/projected/48b870fb-925f-4021-9ada-8977fd5b9d9c-kube-api-access-m92g4\") on node \"crc\" DevicePath \"\""
Jan 23 07:15:02 crc kubenswrapper[4937]: I0123 07:15:02.965121 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48b870fb-925f-4021-9ada-8977fd5b9d9c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:15:03 crc kubenswrapper[4937]: I0123 07:15:03.413028 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"
event={"ID":"48b870fb-925f-4021-9ada-8977fd5b9d9c","Type":"ContainerDied","Data":"6ea4cf97849249ae0c69be4f96702f50fe57f98858513969caf15302988ccdb4"} Jan 23 07:15:03 crc kubenswrapper[4937]: I0123 07:15:03.413067 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ea4cf97849249ae0c69be4f96702f50fe57f98858513969caf15302988ccdb4" Jan 23 07:15:03 crc kubenswrapper[4937]: I0123 07:15:03.413136 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx" Jan 23 07:15:03 crc kubenswrapper[4937]: I0123 07:15:03.797417 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"] Jan 23 07:15:03 crc kubenswrapper[4937]: I0123 07:15:03.805439 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485830-txg4v"] Jan 23 07:15:04 crc kubenswrapper[4937]: I0123 07:15:04.538040 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecae0951-0752-4c13-98ab-6fa8f4f86c33" path="/var/lib/kubelet/pods/ecae0951-0752-4c13-98ab-6fa8f4f86c33/volumes" Jan 23 07:15:05 crc kubenswrapper[4937]: I0123 07:15:05.527496 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:15:05 crc kubenswrapper[4937]: E0123 07:15:05.528197 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:15:08 crc kubenswrapper[4937]: I0123 07:15:08.312405 4937 scope.go:117] "RemoveContainer" 
containerID="f819f4315b10e5f76bf96cfb7c0ea4f0e81f6a3794753f525b4786a3f2b6bd89" Jan 23 07:15:18 crc kubenswrapper[4937]: I0123 07:15:18.526807 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:15:18 crc kubenswrapper[4937]: E0123 07:15:18.527733 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:15:32 crc kubenswrapper[4937]: I0123 07:15:32.529463 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:15:32 crc kubenswrapper[4937]: E0123 07:15:32.530302 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:15:43 crc kubenswrapper[4937]: I0123 07:15:43.527345 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:15:43 crc kubenswrapper[4937]: E0123 07:15:43.528384 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:15:55 crc kubenswrapper[4937]: I0123 07:15:55.527334 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:15:55 crc kubenswrapper[4937]: E0123 07:15:55.528397 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:16:07 crc kubenswrapper[4937]: I0123 07:16:07.526760 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:16:07 crc kubenswrapper[4937]: E0123 07:16:07.527557 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:16:20 crc kubenswrapper[4937]: I0123 07:16:20.537719 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:16:20 crc kubenswrapper[4937]: E0123 07:16:20.538811 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:16:33 crc kubenswrapper[4937]: I0123 07:16:33.527371 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:16:33 crc kubenswrapper[4937]: E0123 07:16:33.528204 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:16:46 crc kubenswrapper[4937]: I0123 07:16:46.526172 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:16:46 crc kubenswrapper[4937]: E0123 07:16:46.531380 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:17:01 crc kubenswrapper[4937]: I0123 07:17:01.526919 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:17:01 crc kubenswrapper[4937]: E0123 07:17:01.528892 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:17:16 crc kubenswrapper[4937]: I0123 07:17:16.526806 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:17:16 crc kubenswrapper[4937]: E0123 07:17:16.527753 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:17:28 crc kubenswrapper[4937]: I0123 07:17:28.805988 4937 generic.go:334] "Generic (PLEG): container finished" podID="070d09b2-6b7b-4d86-976f-aafd5c706f42" containerID="40191a075962ac799e90328ee243eaad0472ad6228fe9ae2af2dd8649ba3a48e" exitCode=0 Jan 23 07:17:28 crc kubenswrapper[4937]: I0123 07:17:28.806048 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" event={"ID":"070d09b2-6b7b-4d86-976f-aafd5c706f42","Type":"ContainerDied","Data":"40191a075962ac799e90328ee243eaad0472ad6228fe9ae2af2dd8649ba3a48e"} Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.324832 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.417113 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-ssh-key-openstack-edpm-ipam\") pod \"070d09b2-6b7b-4d86-976f-aafd5c706f42\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.417221 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-combined-ca-bundle\") pod \"070d09b2-6b7b-4d86-976f-aafd5c706f42\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.417362 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktx5r\" (UniqueName: \"kubernetes.io/projected/070d09b2-6b7b-4d86-976f-aafd5c706f42-kube-api-access-ktx5r\") pod \"070d09b2-6b7b-4d86-976f-aafd5c706f42\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.417401 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-secret-0\") pod \"070d09b2-6b7b-4d86-976f-aafd5c706f42\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.417454 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-inventory\") pod \"070d09b2-6b7b-4d86-976f-aafd5c706f42\" (UID: \"070d09b2-6b7b-4d86-976f-aafd5c706f42\") " Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.422657 4937 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "070d09b2-6b7b-4d86-976f-aafd5c706f42" (UID: "070d09b2-6b7b-4d86-976f-aafd5c706f42"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.424481 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070d09b2-6b7b-4d86-976f-aafd5c706f42-kube-api-access-ktx5r" (OuterVolumeSpecName: "kube-api-access-ktx5r") pod "070d09b2-6b7b-4d86-976f-aafd5c706f42" (UID: "070d09b2-6b7b-4d86-976f-aafd5c706f42"). InnerVolumeSpecName "kube-api-access-ktx5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.451700 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "070d09b2-6b7b-4d86-976f-aafd5c706f42" (UID: "070d09b2-6b7b-4d86-976f-aafd5c706f42"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.454648 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "070d09b2-6b7b-4d86-976f-aafd5c706f42" (UID: "070d09b2-6b7b-4d86-976f-aafd5c706f42"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.456962 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-inventory" (OuterVolumeSpecName: "inventory") pod "070d09b2-6b7b-4d86-976f-aafd5c706f42" (UID: "070d09b2-6b7b-4d86-976f-aafd5c706f42"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.519726 4937 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.519788 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktx5r\" (UniqueName: \"kubernetes.io/projected/070d09b2-6b7b-4d86-976f-aafd5c706f42-kube-api-access-ktx5r\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.519799 4937 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.519809 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.519818 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/070d09b2-6b7b-4d86-976f-aafd5c706f42-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.824498 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" event={"ID":"070d09b2-6b7b-4d86-976f-aafd5c706f42","Type":"ContainerDied","Data":"c9eddb24cd4420778d13f0d22eb67bb5e943bb8e7a43f2782789192605346c82"} Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.824535 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9eddb24cd4420778d13f0d22eb67bb5e943bb8e7a43f2782789192605346c82" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.824553 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-86xtr" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.950940 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"] Jan 23 07:17:30 crc kubenswrapper[4937]: E0123 07:17:30.951613 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070d09b2-6b7b-4d86-976f-aafd5c706f42" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.951707 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="070d09b2-6b7b-4d86-976f-aafd5c706f42" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 07:17:30 crc kubenswrapper[4937]: E0123 07:17:30.951822 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b870fb-925f-4021-9ada-8977fd5b9d9c" containerName="collect-profiles" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.951879 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b870fb-925f-4021-9ada-8977fd5b9d9c" containerName="collect-profiles" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.952129 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b870fb-925f-4021-9ada-8977fd5b9d9c" containerName="collect-profiles" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.952207 4937 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="070d09b2-6b7b-4d86-976f-aafd5c706f42" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.953049 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.959722 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.960635 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.960843 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.961015 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.961240 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.961387 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.961447 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:17:30 crc kubenswrapper[4937]: I0123 07:17:30.965499 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"] Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027347 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027457 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027484 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027513 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhscv\" (UniqueName: \"kubernetes.io/projected/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-kube-api-access-hhscv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027533 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: 
\"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027680 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027710 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027741 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.027766 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: 
I0123 07:17:31.129063 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhscv\" (UniqueName: \"kubernetes.io/projected/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-kube-api-access-hhscv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129141 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129199 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129241 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129277 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129308 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129396 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129475 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.129510 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 
07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.130292 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.133419 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.133437 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.134005 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.134398 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.134938 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.147177 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.147317 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.149519 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhscv\" (UniqueName: \"kubernetes.io/projected/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-kube-api-access-hhscv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-cw5ns\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.324023 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.527416 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:17:31 crc kubenswrapper[4937]: E0123 07:17:31.528007 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.895697 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 07:17:31 crc kubenswrapper[4937]: I0123 07:17:31.901720 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns"]
Jan 23 07:17:32 crc kubenswrapper[4937]: I0123 07:17:32.857289 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" event={"ID":"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7","Type":"ContainerStarted","Data":"e5e8a7005633c1081dd2f1f3466ef52ddca4b10920fdd4da76948b32e7f879d1"}
Jan 23 07:17:32 crc kubenswrapper[4937]: I0123 07:17:32.857637 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" event={"ID":"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7","Type":"ContainerStarted","Data":"abb5b3b52c7877d5c75683a43a88be7fa60a6e3e532cebfd235df83d06f622bb"}
Jan 23 07:17:32 crc kubenswrapper[4937]: I0123 07:17:32.888003 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" podStartSLOduration=2.386008203 podStartE2EDuration="2.887975322s" podCreationTimestamp="2026-01-23 07:17:30 +0000 UTC" firstStartedPulling="2026-01-23 07:17:31.895453985 +0000 UTC m=+2651.699220638" lastFinishedPulling="2026-01-23 07:17:32.397421104 +0000 UTC m=+2652.201187757" observedRunningTime="2026-01-23 07:17:32.873643083 +0000 UTC m=+2652.677409746" watchObservedRunningTime="2026-01-23 07:17:32.887975322 +0000 UTC m=+2652.691742015"
Jan 23 07:17:43 crc kubenswrapper[4937]: I0123 07:17:43.526340 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:17:43 crc kubenswrapper[4937]: E0123 07:17:43.527368 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:17:54 crc kubenswrapper[4937]: I0123 07:17:54.526857 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:17:54 crc kubenswrapper[4937]: E0123 07:17:54.527853 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:18:06 crc kubenswrapper[4937]: I0123 07:18:06.527280 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:18:06 crc kubenswrapper[4937]: E0123 07:18:06.528207 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:18:20 crc kubenswrapper[4937]: I0123 07:18:20.534412 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:18:20 crc kubenswrapper[4937]: E0123 07:18:20.535882 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:18:31 crc kubenswrapper[4937]: I0123 07:18:31.526961 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:18:31 crc kubenswrapper[4937]: E0123 07:18:31.527679 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.375846 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-65ck2"]
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.378630 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.389071 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65ck2"]
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.505583 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-utilities\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.505728 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-catalog-content\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.506132 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2f6m\" (UniqueName: \"kubernetes.io/projected/7b2cccb4-af50-4507-854c-605cd6b9c890-kube-api-access-p2f6m\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.608442 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-catalog-content\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.608567 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2f6m\" (UniqueName: \"kubernetes.io/projected/7b2cccb4-af50-4507-854c-605cd6b9c890-kube-api-access-p2f6m\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.608670 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-utilities\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.609438 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-utilities\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.609883 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-catalog-content\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.630034 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2f6m\" (UniqueName: \"kubernetes.io/projected/7b2cccb4-af50-4507-854c-605cd6b9c890-kube-api-access-p2f6m\") pod \"redhat-operators-65ck2\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") " pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:41 crc kubenswrapper[4937]: I0123 07:18:41.732422 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:42 crc kubenswrapper[4937]: I0123 07:18:42.189279 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-65ck2"]
Jan 23 07:18:42 crc kubenswrapper[4937]: I0123 07:18:42.564239 4937 generic.go:334] "Generic (PLEG): container finished" podID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerID="f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c" exitCode=0
Jan 23 07:18:42 crc kubenswrapper[4937]: I0123 07:18:42.564279 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerDied","Data":"f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c"}
Jan 23 07:18:42 crc kubenswrapper[4937]: I0123 07:18:42.564304 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerStarted","Data":"0e8f39089b2bbd7493133f08144b77efffee0bdcf4d1513f9ef72822e78b954d"}
Jan 23 07:18:43 crc kubenswrapper[4937]: I0123 07:18:43.577668 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerStarted","Data":"79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b"}
Jan 23 07:18:44 crc kubenswrapper[4937]: I0123 07:18:44.527300 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:18:44 crc kubenswrapper[4937]: E0123 07:18:44.527799 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:18:44 crc kubenswrapper[4937]: I0123 07:18:44.587819 4937 generic.go:334] "Generic (PLEG): container finished" podID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerID="79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b" exitCode=0
Jan 23 07:18:44 crc kubenswrapper[4937]: I0123 07:18:44.587863 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerDied","Data":"79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b"}
Jan 23 07:18:46 crc kubenswrapper[4937]: I0123 07:18:46.612936 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerStarted","Data":"89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a"}
Jan 23 07:18:46 crc kubenswrapper[4937]: I0123 07:18:46.637444 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-65ck2" podStartSLOduration=3.196192665 podStartE2EDuration="5.637425864s" podCreationTimestamp="2026-01-23 07:18:41 +0000 UTC" firstStartedPulling="2026-01-23 07:18:42.565769344 +0000 UTC m=+2722.369535997" lastFinishedPulling="2026-01-23 07:18:45.007002543 +0000 UTC m=+2724.810769196" observedRunningTime="2026-01-23 07:18:46.6298988 +0000 UTC m=+2726.433665463" watchObservedRunningTime="2026-01-23 07:18:46.637425864 +0000 UTC m=+2726.441192517"
Jan 23 07:18:51 crc kubenswrapper[4937]: I0123 07:18:51.732687 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:51 crc kubenswrapper[4937]: I0123 07:18:51.733289 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:51 crc kubenswrapper[4937]: I0123 07:18:51.815480 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:52 crc kubenswrapper[4937]: I0123 07:18:52.724253 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:52 crc kubenswrapper[4937]: I0123 07:18:52.777916 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65ck2"]
Jan 23 07:18:54 crc kubenswrapper[4937]: I0123 07:18:54.684772 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-65ck2" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="registry-server" containerID="cri-o://89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a" gracePeriod=2
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.189395 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.308185 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2f6m\" (UniqueName: \"kubernetes.io/projected/7b2cccb4-af50-4507-854c-605cd6b9c890-kube-api-access-p2f6m\") pod \"7b2cccb4-af50-4507-854c-605cd6b9c890\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") "
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.308250 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-catalog-content\") pod \"7b2cccb4-af50-4507-854c-605cd6b9c890\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") "
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.308291 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-utilities\") pod \"7b2cccb4-af50-4507-854c-605cd6b9c890\" (UID: \"7b2cccb4-af50-4507-854c-605cd6b9c890\") "
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.309406 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-utilities" (OuterVolumeSpecName: "utilities") pod "7b2cccb4-af50-4507-854c-605cd6b9c890" (UID: "7b2cccb4-af50-4507-854c-605cd6b9c890"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.323216 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2cccb4-af50-4507-854c-605cd6b9c890-kube-api-access-p2f6m" (OuterVolumeSpecName: "kube-api-access-p2f6m") pod "7b2cccb4-af50-4507-854c-605cd6b9c890" (UID: "7b2cccb4-af50-4507-854c-605cd6b9c890"). InnerVolumeSpecName "kube-api-access-p2f6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.410336 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2f6m\" (UniqueName: \"kubernetes.io/projected/7b2cccb4-af50-4507-854c-605cd6b9c890-kube-api-access-p2f6m\") on node \"crc\" DevicePath \"\""
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.410379 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.426755 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b2cccb4-af50-4507-854c-605cd6b9c890" (UID: "7b2cccb4-af50-4507-854c-605cd6b9c890"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.511997 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b2cccb4-af50-4507-854c-605cd6b9c890-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.696574 4937 generic.go:334] "Generic (PLEG): container finished" podID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerID="89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a" exitCode=0
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.696631 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerDied","Data":"89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a"}
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.696671 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-65ck2"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.696693 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-65ck2" event={"ID":"7b2cccb4-af50-4507-854c-605cd6b9c890","Type":"ContainerDied","Data":"0e8f39089b2bbd7493133f08144b77efffee0bdcf4d1513f9ef72822e78b954d"}
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.696717 4937 scope.go:117] "RemoveContainer" containerID="89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.731656 4937 scope.go:117] "RemoveContainer" containerID="79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.738226 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-65ck2"]
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.756030 4937 scope.go:117] "RemoveContainer" containerID="f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.757771 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-65ck2"]
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.807412 4937 scope.go:117] "RemoveContainer" containerID="89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a"
Jan 23 07:18:55 crc kubenswrapper[4937]: E0123 07:18:55.808169 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a\": container with ID starting with 89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a not found: ID does not exist" containerID="89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.808219 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a"} err="failed to get container status \"89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a\": rpc error: code = NotFound desc = could not find container \"89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a\": container with ID starting with 89e2043e00bf17464a7aa8437948dd4c95e3ad3d66190725ef8a3593698a836a not found: ID does not exist"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.808250 4937 scope.go:117] "RemoveContainer" containerID="79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b"
Jan 23 07:18:55 crc kubenswrapper[4937]: E0123 07:18:55.808555 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b\": container with ID starting with 79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b not found: ID does not exist" containerID="79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.808606 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b"} err="failed to get container status \"79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b\": rpc error: code = NotFound desc = could not find container \"79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b\": container with ID starting with 79d53dc8e63f50ab0524011ac0832beefd824280252ccacf0f062039ab2c559b not found: ID does not exist"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.808629 4937 scope.go:117] "RemoveContainer" containerID="f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c"
Jan 23 07:18:55 crc kubenswrapper[4937]: E0123 07:18:55.808887 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c\": container with ID starting with f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c not found: ID does not exist" containerID="f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c"
Jan 23 07:18:55 crc kubenswrapper[4937]: I0123 07:18:55.808922 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c"} err="failed to get container status \"f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c\": rpc error: code = NotFound desc = could not find container \"f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c\": container with ID starting with f16b82019c2a1bdf275331b1d7ff6fc5a5721bcc4aee0187acd3505bf9bf7e3c not found: ID does not exist"
Jan 23 07:18:56 crc kubenswrapper[4937]: I0123 07:18:56.543927 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" path="/var/lib/kubelet/pods/7b2cccb4-af50-4507-854c-605cd6b9c890/volumes"
Jan 23 07:18:59 crc kubenswrapper[4937]: I0123 07:18:59.526560 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:18:59 crc kubenswrapper[4937]: E0123 07:18:59.527167 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:19:11 crc kubenswrapper[4937]: I0123 07:19:11.526804 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf"
Jan 23 07:19:11 crc kubenswrapper[4937]: I0123 07:19:11.860961 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"4c7a1b80192dda9392c131e247b19f3c7c53deeda7bfef89da6fd5a6fe441dd0"}
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.865981 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j2qsf"]
Jan 23 07:19:39 crc kubenswrapper[4937]: E0123 07:19:39.866863 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="extract-content"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.866875 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="extract-content"
Jan 23 07:19:39 crc kubenswrapper[4937]: E0123 07:19:39.866912 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="extract-utilities"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.866918 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="extract-utilities"
Jan 23 07:19:39 crc kubenswrapper[4937]: E0123 07:19:39.866929 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="registry-server"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.866937 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="registry-server"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.867121 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2cccb4-af50-4507-854c-605cd6b9c890" containerName="registry-server"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.870572 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.935895 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j2qsf"]
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.985900 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6f75\" (UniqueName: \"kubernetes.io/projected/8713920d-bf20-44d0-809c-985e3db0d676-kube-api-access-f6f75\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.986038 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-catalog-content\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:39 crc kubenswrapper[4937]: I0123 07:19:39.986093 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-utilities\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.087835 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-catalog-content\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.087912 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-utilities\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.087962 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6f75\" (UniqueName: \"kubernetes.io/projected/8713920d-bf20-44d0-809c-985e3db0d676-kube-api-access-f6f75\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.088313 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-catalog-content\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.088611 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-utilities\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.117060 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6f75\" (UniqueName: \"kubernetes.io/projected/8713920d-bf20-44d0-809c-985e3db0d676-kube-api-access-f6f75\") pod \"certified-operators-j2qsf\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.252708 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:40 crc kubenswrapper[4937]: I0123 07:19:40.733290 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j2qsf"]
Jan 23 07:19:41 crc kubenswrapper[4937]: I0123 07:19:41.168411 4937 generic.go:334] "Generic (PLEG): container finished" podID="8713920d-bf20-44d0-809c-985e3db0d676" containerID="ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2" exitCode=0
Jan 23 07:19:41 crc kubenswrapper[4937]: I0123 07:19:41.168515 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerDied","Data":"ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2"}
Jan 23 07:19:41 crc kubenswrapper[4937]: I0123 07:19:41.170182 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerStarted","Data":"5f602b2b391f3596a46ecce9c4430e24226abdb1086962e66a8d4ced4dfd19b4"}
Jan 23 07:19:42 crc kubenswrapper[4937]: I0123 07:19:42.179503 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerStarted","Data":"89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa"}
Jan 23 07:19:43 crc kubenswrapper[4937]: I0123 07:19:43.191301 4937 generic.go:334] "Generic (PLEG): container finished" podID="8713920d-bf20-44d0-809c-985e3db0d676" containerID="89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa" exitCode=0
Jan 23 07:19:43 crc kubenswrapper[4937]: I0123 07:19:43.191392 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerDied","Data":"89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa"}
Jan 23 07:19:44 crc kubenswrapper[4937]: I0123 07:19:44.205813 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerStarted","Data":"cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6"}
Jan 23 07:19:44 crc kubenswrapper[4937]: I0123 07:19:44.236385 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j2qsf" podStartSLOduration=2.714195034 podStartE2EDuration="5.236358144s" podCreationTimestamp="2026-01-23 07:19:39 +0000 UTC" firstStartedPulling="2026-01-23 07:19:41.170992674 +0000 UTC m=+2780.974759337" lastFinishedPulling="2026-01-23 07:19:43.693155764 +0000 UTC m=+2783.496922447" observedRunningTime="2026-01-23 07:19:44.224618164 +0000 UTC m=+2784.028384807" watchObservedRunningTime="2026-01-23 07:19:44.236358144 +0000 UTC m=+2784.040124827"
Jan 23 07:19:50 crc kubenswrapper[4937]: I0123 07:19:50.265550 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:50 crc kubenswrapper[4937]: I0123 07:19:50.266028 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:50 crc kubenswrapper[4937]: I0123 07:19:50.313827 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:51 crc kubenswrapper[4937]: I0123 07:19:51.326166 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j2qsf"
Jan 23 07:19:51 crc kubenswrapper[4937]: I0123
07:19:51.381340 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j2qsf"] Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.306371 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j2qsf" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="registry-server" containerID="cri-o://cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6" gracePeriod=2 Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.858518 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j2qsf" Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.925423 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-utilities\") pod \"8713920d-bf20-44d0-809c-985e3db0d676\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.925570 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6f75\" (UniqueName: \"kubernetes.io/projected/8713920d-bf20-44d0-809c-985e3db0d676-kube-api-access-f6f75\") pod \"8713920d-bf20-44d0-809c-985e3db0d676\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.927056 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-catalog-content\") pod \"8713920d-bf20-44d0-809c-985e3db0d676\" (UID: \"8713920d-bf20-44d0-809c-985e3db0d676\") " Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.927168 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-utilities" (OuterVolumeSpecName: 
"utilities") pod "8713920d-bf20-44d0-809c-985e3db0d676" (UID: "8713920d-bf20-44d0-809c-985e3db0d676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.928046 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.932679 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8713920d-bf20-44d0-809c-985e3db0d676-kube-api-access-f6f75" (OuterVolumeSpecName: "kube-api-access-f6f75") pod "8713920d-bf20-44d0-809c-985e3db0d676" (UID: "8713920d-bf20-44d0-809c-985e3db0d676"). InnerVolumeSpecName "kube-api-access-f6f75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:19:53 crc kubenswrapper[4937]: I0123 07:19:53.994947 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8713920d-bf20-44d0-809c-985e3db0d676" (UID: "8713920d-bf20-44d0-809c-985e3db0d676"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.029905 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8713920d-bf20-44d0-809c-985e3db0d676-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.029940 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6f75\" (UniqueName: \"kubernetes.io/projected/8713920d-bf20-44d0-809c-985e3db0d676-kube-api-access-f6f75\") on node \"crc\" DevicePath \"\"" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.319842 4937 generic.go:334] "Generic (PLEG): container finished" podID="8713920d-bf20-44d0-809c-985e3db0d676" containerID="cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6" exitCode=0 Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.319884 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerDied","Data":"cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6"} Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.319911 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j2qsf" event={"ID":"8713920d-bf20-44d0-809c-985e3db0d676","Type":"ContainerDied","Data":"5f602b2b391f3596a46ecce9c4430e24226abdb1086962e66a8d4ced4dfd19b4"} Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.319932 4937 scope.go:117] "RemoveContainer" containerID="cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.320047 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j2qsf" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.356025 4937 scope.go:117] "RemoveContainer" containerID="89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.375976 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j2qsf"] Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.396072 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j2qsf"] Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.403841 4937 scope.go:117] "RemoveContainer" containerID="ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.432813 4937 scope.go:117] "RemoveContainer" containerID="cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6" Jan 23 07:19:54 crc kubenswrapper[4937]: E0123 07:19:54.433395 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6\": container with ID starting with cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6 not found: ID does not exist" containerID="cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.433433 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6"} err="failed to get container status \"cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6\": rpc error: code = NotFound desc = could not find container \"cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6\": container with ID starting with cbd90b3df73266ba59e3cab572c73e96b4bb73ce8c88e42d94feb3307861c9d6 not 
found: ID does not exist" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.433456 4937 scope.go:117] "RemoveContainer" containerID="89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa" Jan 23 07:19:54 crc kubenswrapper[4937]: E0123 07:19:54.433930 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa\": container with ID starting with 89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa not found: ID does not exist" containerID="89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.433953 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa"} err="failed to get container status \"89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa\": rpc error: code = NotFound desc = could not find container \"89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa\": container with ID starting with 89e597a5ff56bbf7adc92643df652847ce5bba12381095d8b04b9df5cca8f7aa not found: ID does not exist" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.433966 4937 scope.go:117] "RemoveContainer" containerID="ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2" Jan 23 07:19:54 crc kubenswrapper[4937]: E0123 07:19:54.435795 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2\": container with ID starting with ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2 not found: ID does not exist" containerID="ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.435836 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2"} err="failed to get container status \"ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2\": rpc error: code = NotFound desc = could not find container \"ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2\": container with ID starting with ecf1f2490ae040d7c5e61d14cc7a71b0b20891f6864940951533280b219e7fb2 not found: ID does not exist" Jan 23 07:19:54 crc kubenswrapper[4937]: I0123 07:19:54.537906 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8713920d-bf20-44d0-809c-985e3db0d676" path="/var/lib/kubelet/pods/8713920d-bf20-44d0-809c-985e3db0d676/volumes" Jan 23 07:20:15 crc kubenswrapper[4937]: I0123 07:20:15.549106 4937 generic.go:334] "Generic (PLEG): container finished" podID="d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" containerID="e5e8a7005633c1081dd2f1f3466ef52ddca4b10920fdd4da76948b32e7f879d1" exitCode=0 Jan 23 07:20:15 crc kubenswrapper[4937]: I0123 07:20:15.549204 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" event={"ID":"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7","Type":"ContainerDied","Data":"e5e8a7005633c1081dd2f1f3466ef52ddca4b10920fdd4da76948b32e7f879d1"} Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.010531 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145172 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-inventory\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145213 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-combined-ca-bundle\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145479 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhscv\" (UniqueName: \"kubernetes.io/projected/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-kube-api-access-hhscv\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145496 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-1\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145519 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-1\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145535 4937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-0\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145678 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-extra-config-0\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145702 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-ssh-key-openstack-edpm-ipam\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.145719 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-0\") pod \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\" (UID: \"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7\") " Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.152107 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.152847 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-kube-api-access-hhscv" (OuterVolumeSpecName: "kube-api-access-hhscv") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "kube-api-access-hhscv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.175096 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.175889 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.181996 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.184283 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.186523 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.187502 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.190798 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-inventory" (OuterVolumeSpecName: "inventory") pod "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" (UID: "d9e1570d-32bf-4347-a51b-1d88b1cc2ea7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247709 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhscv\" (UniqueName: \"kubernetes.io/projected/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-kube-api-access-hhscv\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247747 4937 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247761 4937 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247773 4937 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247787 4937 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247798 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247810 4937 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247822 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.247835 4937 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9e1570d-32bf-4347-a51b-1d88b1cc2ea7-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.575877 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" event={"ID":"d9e1570d-32bf-4347-a51b-1d88b1cc2ea7","Type":"ContainerDied","Data":"abb5b3b52c7877d5c75683a43a88be7fa60a6e3e532cebfd235df83d06f622bb"} Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.576675 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abb5b3b52c7877d5c75683a43a88be7fa60a6e3e532cebfd235df83d06f622bb" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.575931 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-cw5ns" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.740824 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt"] Jan 23 07:20:17 crc kubenswrapper[4937]: E0123 07:20:17.741303 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="registry-server" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.741323 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="registry-server" Jan 23 07:20:17 crc kubenswrapper[4937]: E0123 07:20:17.741361 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="extract-content" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.741368 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="extract-content" Jan 23 07:20:17 crc kubenswrapper[4937]: E0123 07:20:17.741376 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.741382 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 07:20:17 crc kubenswrapper[4937]: E0123 07:20:17.741395 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="extract-utilities" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.741401 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="extract-utilities" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.741591 4937 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e1570d-32bf-4347-a51b-1d88b1cc2ea7" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.741619 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8713920d-bf20-44d0-809c-985e3db0d676" containerName="registry-server" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.744095 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.746437 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.748803 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rm45q" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.749122 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.749676 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.750180 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.753557 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt"] Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.864828 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.864917 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.864976 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.865019 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.865055 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rsj\" (UniqueName: \"kubernetes.io/projected/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-kube-api-access-d6rsj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.865103 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.865305 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.967998 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.968182 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.968296 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.968346 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.968409 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6rsj\" (UniqueName: \"kubernetes.io/projected/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-kube-api-access-d6rsj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.968504 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.968652 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.972430 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.972897 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.973132 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.974186 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: 
\"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.974401 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.975415 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:17 crc kubenswrapper[4937]: I0123 07:20:17.990167 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6rsj\" (UniqueName: \"kubernetes.io/projected/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-kube-api-access-d6rsj\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:18 crc kubenswrapper[4937]: I0123 07:20:18.076360 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:20:18 crc kubenswrapper[4937]: I0123 07:20:18.623191 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt"] Jan 23 07:20:19 crc kubenswrapper[4937]: I0123 07:20:19.613693 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" event={"ID":"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee","Type":"ContainerStarted","Data":"97a5b3e107ebcb4cd227d79b94dfc140055769d412a94f5432006d8faf098831"} Jan 23 07:20:19 crc kubenswrapper[4937]: I0123 07:20:19.614134 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" event={"ID":"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee","Type":"ContainerStarted","Data":"0b9df94dbb980ca521ec1c53e0cf0eaf72a2aca382a4cb8ed4c421e9586a9e2f"} Jan 23 07:20:19 crc kubenswrapper[4937]: I0123 07:20:19.637975 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" podStartSLOduration=2.236675327 podStartE2EDuration="2.637957411s" podCreationTimestamp="2026-01-23 07:20:17 +0000 UTC" firstStartedPulling="2026-01-23 07:20:18.62523949 +0000 UTC m=+2818.429006143" lastFinishedPulling="2026-01-23 07:20:19.026521524 +0000 UTC m=+2818.830288227" observedRunningTime="2026-01-23 07:20:19.635842024 +0000 UTC m=+2819.439608717" watchObservedRunningTime="2026-01-23 07:20:19.637957411 +0000 UTC m=+2819.441724064" Jan 23 07:21:37 crc kubenswrapper[4937]: I0123 07:21:37.724691 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:21:37 crc kubenswrapper[4937]: 
I0123 07:21:37.725444 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:22:07 crc kubenswrapper[4937]: I0123 07:22:07.726009 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:22:07 crc kubenswrapper[4937]: I0123 07:22:07.726520 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.724482 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.724902 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.724943 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.725749 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c7a1b80192dda9392c131e247b19f3c7c53deeda7bfef89da6fd5a6fe441dd0"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.725801 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://4c7a1b80192dda9392c131e247b19f3c7c53deeda7bfef89da6fd5a6fe441dd0" gracePeriod=600 Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.999430 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="4c7a1b80192dda9392c131e247b19f3c7c53deeda7bfef89da6fd5a6fe441dd0" exitCode=0 Jan 23 07:22:37 crc kubenswrapper[4937]: I0123 07:22:37.999485 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"4c7a1b80192dda9392c131e247b19f3c7c53deeda7bfef89da6fd5a6fe441dd0"} Jan 23 07:22:38 crc kubenswrapper[4937]: I0123 07:22:37.999822 4937 scope.go:117] "RemoveContainer" containerID="bf8bb0a2ed26a1b43b5cfcb6f9b4aef8781264785b00e6fa7e42cfc46a80d6bf" Jan 23 07:22:39 crc kubenswrapper[4937]: I0123 07:22:39.015488 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"} Jan 23 07:22:42 crc kubenswrapper[4937]: I0123 07:22:42.060620 4937 generic.go:334] "Generic (PLEG): container finished" podID="0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" containerID="97a5b3e107ebcb4cd227d79b94dfc140055769d412a94f5432006d8faf098831" exitCode=0 Jan 23 07:22:42 crc kubenswrapper[4937]: I0123 07:22:42.060677 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" event={"ID":"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee","Type":"ContainerDied","Data":"97a5b3e107ebcb4cd227d79b94dfc140055769d412a94f5432006d8faf098831"} Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.503729 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.645815 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-0\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.645936 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ssh-key-openstack-edpm-ipam\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.646021 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-1\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.646181 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-telemetry-combined-ca-bundle\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.646228 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-inventory\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.646336 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-2\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.646470 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6rsj\" (UniqueName: \"kubernetes.io/projected/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-kube-api-access-d6rsj\") pod \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\" (UID: \"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee\") " Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.652763 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-kube-api-access-d6rsj" (OuterVolumeSpecName: "kube-api-access-d6rsj") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: 
"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "kube-api-access-d6rsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.654531 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.680047 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.681757 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.703929 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-inventory" (OuterVolumeSpecName: "inventory") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.706062 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.707255 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" (UID: "0d580b0b-d08e-4ffd-8a8d-e3a023d567ee"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.751479 4937 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.752122 4937 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.752136 4937 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.752147 4937 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-d6rsj\" (UniqueName: \"kubernetes.io/projected/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-kube-api-access-d6rsj\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.752159 4937 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.752170 4937 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:43 crc kubenswrapper[4937]: I0123 07:22:43.752180 4937 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0d580b0b-d08e-4ffd-8a8d-e3a023d567ee-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:22:44 crc kubenswrapper[4937]: I0123 07:22:44.082505 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" event={"ID":"0d580b0b-d08e-4ffd-8a8d-e3a023d567ee","Type":"ContainerDied","Data":"0b9df94dbb980ca521ec1c53e0cf0eaf72a2aca382a4cb8ed4c421e9586a9e2f"} Jan 23 07:22:44 crc kubenswrapper[4937]: I0123 07:22:44.082546 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b9df94dbb980ca521ec1c53e0cf0eaf72a2aca382a4cb8ed4c421e9586a9e2f" Jan 23 07:22:44 crc kubenswrapper[4937]: I0123 07:22:44.082652 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.420137 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 23 07:23:21 crc kubenswrapper[4937]: E0123 07:23:21.421468 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.421489 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.421768 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d580b0b-d08e-4ffd-8a8d-e3a023d567ee" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.423122 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.425873 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.440916 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.503345 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.505324 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.508514 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509375 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-config-data\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509612 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509688 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509736 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-lib-modules\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509785 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-nvme\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509817 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-scripts\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.509848 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762l2\" (UniqueName: \"kubernetes.io/projected/8ce513c8-df23-4201-a86b-605c7b2ab636-kube-api-access-762l2\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510037 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510089 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-sys\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510104 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510201 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-dev\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510274 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-config-data-custom\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510325 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-run\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510380 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.510455 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 
07:23:21.526960 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"]
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.555952 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"]
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.557881 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.564277 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.572041 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"]
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612298 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612354 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612408 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612446 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612434 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612519 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-config-data\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.612725 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613263 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-dev\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613313 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613331 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613410 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613442 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613463 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613487 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-sys\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613518 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613695 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613801 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613856 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.613917 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614020 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614047 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-lib-modules\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614029 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-lib-modules\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614094 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrvm\" (UniqueName: \"kubernetes.io/projected/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-kube-api-access-jqrvm\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614134 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-run\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614200 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614222 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-nvme\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614248 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-scripts\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614264 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614280 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614294 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614341 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762l2\" (UniqueName: \"kubernetes.io/projected/8ce513c8-df23-4201-a86b-605c7b2ab636-kube-api-access-762l2\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614364 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614401 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614436 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614463 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614503 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614521 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614558 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614578 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-sys\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614608 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614630 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614669 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614690 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614708 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-dev\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614755 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kd8\" (UniqueName: \"kubernetes.io/projected/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-kube-api-access-t7kd8\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614780 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-config-data-custom\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614804 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-run\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614824 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614841 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614877 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.614975 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-sys\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.615162 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-nvme\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.615254 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-dev\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.615399 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.615489 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8ce513c8-df23-4201-a86b-605c7b2ab636-run\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.618143 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.623498 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-config-data\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.625317 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-config-data-custom\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.630396 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762l2\" (UniqueName: \"kubernetes.io/projected/8ce513c8-df23-4201-a86b-605c7b2ab636-kube-api-access-762l2\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.633188 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ce513c8-df23-4201-a86b-605c7b2ab636-scripts\") pod \"cinder-backup-0\" (UID: \"8ce513c8-df23-4201-a86b-605c7b2ab636\") " pod="openstack/cinder-backup-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.716821 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.716883 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.716919 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.716952 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.716996 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717001 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717021 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-dev\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717029 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717071 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717072 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717161 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717186 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-dev\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717218 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717274 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717300 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717393 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717436 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717473 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717516 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-sys\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717654 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717704 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717711 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717732 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-sys\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717733 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrvm\" (UniqueName: \"kubernetes.io/projected/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-kube-api-access-jqrvm\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717758 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717729 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717797 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-run\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717824 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717857 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717874 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717891 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717856 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-run\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717931 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717968 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.717986 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718000 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718016 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718056 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718077 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718112 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718117 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718137 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718182 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718218 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718244 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.718309 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kd8\" (UniqueName: \"kubernetes.io/projected/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-kube-api-access-t7kd8\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.719995 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.720033 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.720069 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.721039 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.721090 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0"
Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.721265 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.721476 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.722635 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.724917 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.725236 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.725448 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.726679 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.738424 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kd8\" (UniqueName: \"kubernetes.io/projected/23def76a-ea91-4c5f-ad7f-1370bc2e8dc4-kube-api-access-t7kd8\") pod \"cinder-volume-nfs-2-0\" (UID: \"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.738453 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrvm\" (UniqueName: \"kubernetes.io/projected/6f5e0df9-7863-4a53-a050-dc572ec6bf8a-kube-api-access-jqrvm\") pod \"cinder-volume-nfs-0\" (UID: \"6f5e0df9-7863-4a53-a050-dc572ec6bf8a\") " pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.744520 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.821421 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:21 crc kubenswrapper[4937]: I0123 07:23:21.877534 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:22 crc kubenswrapper[4937]: I0123 07:23:22.330929 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 07:23:22 crc kubenswrapper[4937]: W0123 07:23:22.334493 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ce513c8_df23_4201_a86b_605c7b2ab636.slice/crio-e088bb12b5a84dd196e822104f64595e61470ea8ae2f3032b8879e98e00dc74f WatchSource:0}: Error finding container e088bb12b5a84dd196e822104f64595e61470ea8ae2f3032b8879e98e00dc74f: Status 404 returned error can't find the container with id e088bb12b5a84dd196e822104f64595e61470ea8ae2f3032b8879e98e00dc74f Jan 23 07:23:22 crc kubenswrapper[4937]: I0123 07:23:22.338943 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:23:22 crc kubenswrapper[4937]: I0123 07:23:22.447091 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 23 07:23:22 crc kubenswrapper[4937]: I0123 07:23:22.450711 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"8ce513c8-df23-4201-a86b-605c7b2ab636","Type":"ContainerStarted","Data":"e088bb12b5a84dd196e822104f64595e61470ea8ae2f3032b8879e98e00dc74f"} Jan 23 07:23:22 crc kubenswrapper[4937]: W0123 07:23:22.493523 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5e0df9_7863_4a53_a050_dc572ec6bf8a.slice/crio-39d736772c8edfbb0de35a46f5f24a6da5206ceac3c11945e82aa0547127bfc4 WatchSource:0}: Error finding container 39d736772c8edfbb0de35a46f5f24a6da5206ceac3c11945e82aa0547127bfc4: Status 404 returned error can't find the container with id 39d736772c8edfbb0de35a46f5f24a6da5206ceac3c11945e82aa0547127bfc4 Jan 23 07:23:22 crc kubenswrapper[4937]: I0123 07:23:22.611024 4937 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.462771 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"6f5e0df9-7863-4a53-a050-dc572ec6bf8a","Type":"ContainerStarted","Data":"a764d79e7011121cfaeba7b7b9606b9e148dfa28cc94e8d2b13d636606f5f60c"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.464388 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"6f5e0df9-7863-4a53-a050-dc572ec6bf8a","Type":"ContainerStarted","Data":"d320cdecc4f13f17d57cbf3b208912cf195e886ec85c66ebdd5fa95ade0a8d9c"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.464437 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"6f5e0df9-7863-4a53-a050-dc572ec6bf8a","Type":"ContainerStarted","Data":"39d736772c8edfbb0de35a46f5f24a6da5206ceac3c11945e82aa0547127bfc4"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.466093 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"8ce513c8-df23-4201-a86b-605c7b2ab636","Type":"ContainerStarted","Data":"b09224fed4142c4a69b64532b28f7dca64d7efbef6c357965f969e4499b28ad9"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.466133 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"8ce513c8-df23-4201-a86b-605c7b2ab636","Type":"ContainerStarted","Data":"2c30c7aef8367393ee9d309d4752a75893445c89c73825a43d751c0dfcc0af45"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.469465 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4","Type":"ContainerStarted","Data":"e51e040f37e0c08ab1056633845f00fd2439ab8fbb05627481110b9af6b6394e"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.469509 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4","Type":"ContainerStarted","Data":"08bad6ea4fe7e9821dd75c20316c620c98cd3e6d35230cc974541be41c4a27bf"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.469522 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"23def76a-ea91-4c5f-ad7f-1370bc2e8dc4","Type":"ContainerStarted","Data":"f7fe62329c32c476eabe1d9e4c70622cac03d460769e53bc4d0088f63e14d50f"} Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.520889 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.271744569 podStartE2EDuration="2.520864086s" podCreationTimestamp="2026-01-23 07:23:21 +0000 UTC" firstStartedPulling="2026-01-23 07:23:22.496009253 +0000 UTC m=+3002.299775906" lastFinishedPulling="2026-01-23 07:23:22.74512877 +0000 UTC m=+3002.548895423" observedRunningTime="2026-01-23 07:23:23.493451828 +0000 UTC m=+3003.297218491" watchObservedRunningTime="2026-01-23 07:23:23.520864086 +0000 UTC m=+3003.324630739" Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.533205 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.4498982959999998 podStartE2EDuration="2.533185413s" podCreationTimestamp="2026-01-23 07:23:21 +0000 UTC" firstStartedPulling="2026-01-23 07:23:22.66244754 +0000 UTC m=+3002.466214193" lastFinishedPulling="2026-01-23 07:23:22.745734657 +0000 UTC m=+3002.549501310" observedRunningTime="2026-01-23 07:23:23.531499887 +0000 UTC m=+3003.335266540" watchObservedRunningTime="2026-01-23 07:23:23.533185413 +0000 UTC m=+3003.336952066" Jan 23 07:23:23 crc kubenswrapper[4937]: I0123 07:23:23.571055 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.349925876 
podStartE2EDuration="2.571028137s" podCreationTimestamp="2026-01-23 07:23:21 +0000 UTC" firstStartedPulling="2026-01-23 07:23:22.337838801 +0000 UTC m=+3002.141605464" lastFinishedPulling="2026-01-23 07:23:22.558941072 +0000 UTC m=+3002.362707725" observedRunningTime="2026-01-23 07:23:23.562540395 +0000 UTC m=+3003.366307048" watchObservedRunningTime="2026-01-23 07:23:23.571028137 +0000 UTC m=+3003.374794790" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.536993 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snb2n"] Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.540318 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.553167 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snb2n"] Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.619136 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gp7c\" (UniqueName: \"kubernetes.io/projected/42a3fff4-b125-4026-b96e-67149dcf6b28-kube-api-access-9gp7c\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.619215 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-utilities\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.619469 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-catalog-content\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.721771 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gp7c\" (UniqueName: \"kubernetes.io/projected/42a3fff4-b125-4026-b96e-67149dcf6b28-kube-api-access-9gp7c\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.721841 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-utilities\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.721944 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-catalog-content\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.722653 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-catalog-content\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.723421 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-utilities\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.759674 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gp7c\" (UniqueName: \"kubernetes.io/projected/42a3fff4-b125-4026-b96e-67149dcf6b28-kube-api-access-9gp7c\") pod \"community-operators-snb2n\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:25 crc kubenswrapper[4937]: I0123 07:23:25.870978 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:26 crc kubenswrapper[4937]: I0123 07:23:26.356604 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snb2n"] Jan 23 07:23:26 crc kubenswrapper[4937]: I0123 07:23:26.575136 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerStarted","Data":"718bd2755a8528d87197a5b06e5223660b5b84c846ee3a0494a5fc6b2e55acdf"} Jan 23 07:23:26 crc kubenswrapper[4937]: I0123 07:23:26.745150 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 23 07:23:26 crc kubenswrapper[4937]: I0123 07:23:26.822307 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:26 crc kubenswrapper[4937]: I0123 07:23:26.877927 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:27 crc kubenswrapper[4937]: I0123 07:23:27.584124 4937 generic.go:334] "Generic (PLEG): container finished" podID="42a3fff4-b125-4026-b96e-67149dcf6b28" 
containerID="d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc" exitCode=0 Jan 23 07:23:27 crc kubenswrapper[4937]: I0123 07:23:27.584195 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerDied","Data":"d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc"} Jan 23 07:23:28 crc kubenswrapper[4937]: I0123 07:23:28.600270 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerStarted","Data":"e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf"} Jan 23 07:23:29 crc kubenswrapper[4937]: I0123 07:23:29.612541 4937 generic.go:334] "Generic (PLEG): container finished" podID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerID="e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf" exitCode=0 Jan 23 07:23:29 crc kubenswrapper[4937]: I0123 07:23:29.612750 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerDied","Data":"e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf"} Jan 23 07:23:30 crc kubenswrapper[4937]: I0123 07:23:30.623806 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerStarted","Data":"d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e"} Jan 23 07:23:30 crc kubenswrapper[4937]: I0123 07:23:30.639578 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snb2n" podStartSLOduration=3.110372851 podStartE2EDuration="5.63956099s" podCreationTimestamp="2026-01-23 07:23:25 +0000 UTC" firstStartedPulling="2026-01-23 07:23:27.588277935 
+0000 UTC m=+3007.392044588" lastFinishedPulling="2026-01-23 07:23:30.117466074 +0000 UTC m=+3009.921232727" observedRunningTime="2026-01-23 07:23:30.638777488 +0000 UTC m=+3010.442544141" watchObservedRunningTime="2026-01-23 07:23:30.63956099 +0000 UTC m=+3010.443327643" Jan 23 07:23:32 crc kubenswrapper[4937]: I0123 07:23:32.076980 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 23 07:23:32 crc kubenswrapper[4937]: I0123 07:23:32.128525 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 23 07:23:32 crc kubenswrapper[4937]: I0123 07:23:32.309833 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 23 07:23:35 crc kubenswrapper[4937]: I0123 07:23:35.872263 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:35 crc kubenswrapper[4937]: I0123 07:23:35.873014 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:35 crc kubenswrapper[4937]: I0123 07:23:35.965623 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:36 crc kubenswrapper[4937]: I0123 07:23:36.744855 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:36 crc kubenswrapper[4937]: I0123 07:23:36.817693 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snb2n"] Jan 23 07:23:38 crc kubenswrapper[4937]: I0123 07:23:38.698491 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-snb2n" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="registry-server" 
containerID="cri-o://d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e" gracePeriod=2 Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.204025 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.373372 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-catalog-content\") pod \"42a3fff4-b125-4026-b96e-67149dcf6b28\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.373641 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gp7c\" (UniqueName: \"kubernetes.io/projected/42a3fff4-b125-4026-b96e-67149dcf6b28-kube-api-access-9gp7c\") pod \"42a3fff4-b125-4026-b96e-67149dcf6b28\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.373758 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-utilities\") pod \"42a3fff4-b125-4026-b96e-67149dcf6b28\" (UID: \"42a3fff4-b125-4026-b96e-67149dcf6b28\") " Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.374528 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-utilities" (OuterVolumeSpecName: "utilities") pod "42a3fff4-b125-4026-b96e-67149dcf6b28" (UID: "42a3fff4-b125-4026-b96e-67149dcf6b28"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.385068 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a3fff4-b125-4026-b96e-67149dcf6b28-kube-api-access-9gp7c" (OuterVolumeSpecName: "kube-api-access-9gp7c") pod "42a3fff4-b125-4026-b96e-67149dcf6b28" (UID: "42a3fff4-b125-4026-b96e-67149dcf6b28"). InnerVolumeSpecName "kube-api-access-9gp7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.476133 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gp7c\" (UniqueName: \"kubernetes.io/projected/42a3fff4-b125-4026-b96e-67149dcf6b28-kube-api-access-9gp7c\") on node \"crc\" DevicePath \"\"" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.476379 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.549110 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42a3fff4-b125-4026-b96e-67149dcf6b28" (UID: "42a3fff4-b125-4026-b96e-67149dcf6b28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.578298 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42a3fff4-b125-4026-b96e-67149dcf6b28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.709577 4937 generic.go:334] "Generic (PLEG): container finished" podID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerID="d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e" exitCode=0 Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.709641 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerDied","Data":"d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e"} Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.709652 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snb2n" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.709674 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snb2n" event={"ID":"42a3fff4-b125-4026-b96e-67149dcf6b28","Type":"ContainerDied","Data":"718bd2755a8528d87197a5b06e5223660b5b84c846ee3a0494a5fc6b2e55acdf"} Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.709692 4937 scope.go:117] "RemoveContainer" containerID="d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.778891 4937 scope.go:117] "RemoveContainer" containerID="e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.806897 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snb2n"] Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.842980 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-snb2n"] Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.849781 4937 scope.go:117] "RemoveContainer" containerID="d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.908427 4937 scope.go:117] "RemoveContainer" containerID="d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e" Jan 23 07:23:39 crc kubenswrapper[4937]: E0123 07:23:39.909073 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e\": container with ID starting with d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e not found: ID does not exist" containerID="d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.909134 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e"} err="failed to get container status \"d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e\": rpc error: code = NotFound desc = could not find container \"d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e\": container with ID starting with d23fde4ab85ed396bf7d1a7277289765eca09c45ee87c88d440b01f3a249c08e not found: ID does not exist" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.909165 4937 scope.go:117] "RemoveContainer" containerID="e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf" Jan 23 07:23:39 crc kubenswrapper[4937]: E0123 07:23:39.913748 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf\": container with ID starting with e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf not found: ID does not exist" containerID="e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.913822 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf"} err="failed to get container status \"e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf\": rpc error: code = NotFound desc = could not find container \"e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf\": container with ID starting with e851448dcab626d5f7d1014f7eb74bc4024ece654ba3ebf54d635e885bfafccf not found: ID does not exist" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.913853 4937 scope.go:117] "RemoveContainer" containerID="d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc" Jan 23 07:23:39 crc kubenswrapper[4937]: E0123 
07:23:39.915137 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc\": container with ID starting with d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc not found: ID does not exist" containerID="d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc" Jan 23 07:23:39 crc kubenswrapper[4937]: I0123 07:23:39.915179 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc"} err="failed to get container status \"d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc\": rpc error: code = NotFound desc = could not find container \"d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc\": container with ID starting with d9942f883073f2cf3b83414e42d97373a227e7b451047abebbea28843ba647dc not found: ID does not exist" Jan 23 07:23:40 crc kubenswrapper[4937]: I0123 07:23:40.551450 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" path="/var/lib/kubelet/pods/42a3fff4-b125-4026-b96e-67149dcf6b28/volumes" Jan 23 07:24:23 crc kubenswrapper[4937]: I0123 07:24:23.650257 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:24:23 crc kubenswrapper[4937]: I0123 07:24:23.651628 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="prometheus" containerID="cri-o://7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e" gracePeriod=600 Jan 23 07:24:23 crc kubenswrapper[4937]: I0123 07:24:23.651725 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" 
podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="config-reloader" containerID="cri-o://a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8" gracePeriod=600 Jan 23 07:24:23 crc kubenswrapper[4937]: I0123 07:24:23.651719 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="thanos-sidecar" containerID="cri-o://7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7" gracePeriod=600 Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.249325 4937 generic.go:334] "Generic (PLEG): container finished" podID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerID="7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7" exitCode=0 Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.249370 4937 generic.go:334] "Generic (PLEG): container finished" podID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerID="7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e" exitCode=0 Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.249397 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerDied","Data":"7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7"} Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.249429 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerDied","Data":"7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e"} Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.611156 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.715421 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-0\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.715496 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.715563 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config-out\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.715624 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n4sx\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-kube-api-access-7n4sx\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.715653 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-thanos-prometheus-http-client-file\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: 
\"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716111 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716169 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-2\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716219 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716245 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-1\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716282 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716317 4937 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716354 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-tls-assets\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716401 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.716409 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-secret-combined-ca-bundle\") pod \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\" (UID: \"721219b7-bb75-49ef-8bbc-09ca3e7eb76f\") " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.717012 4937 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.717476 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.717728 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.725095 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.725291 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config-out" (OuterVolumeSpecName: "config-out") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.726046 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.735618 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). 
InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.745067 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-kube-api-access-7n4sx" (OuterVolumeSpecName: "kube-api-access-7n4sx") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "kube-api-access-7n4sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.745124 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.745166 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.745758 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config" (OuterVolumeSpecName: "config") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.777027 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820001 4937 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820041 4937 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820056 4937 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820068 4937 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820078 4937 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-secret-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820086 4937 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820094 4937 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820105 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n4sx\" (UniqueName: \"kubernetes.io/projected/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-kube-api-access-7n4sx\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820115 4937 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820143 4937 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") on node \"crc\" " Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.820153 4937 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.836216 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config" (OuterVolumeSpecName: "web-config") pod "721219b7-bb75-49ef-8bbc-09ca3e7eb76f" (UID: "721219b7-bb75-49ef-8bbc-09ca3e7eb76f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.858081 4937 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.858266 4937 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6") on node "crc" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.921836 4937 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/721219b7-bb75-49ef-8bbc-09ca3e7eb76f-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:24 crc kubenswrapper[4937]: I0123 07:24:24.921874 4937 reconciler_common.go:293] "Volume detached for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") on node \"crc\" DevicePath \"\"" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.261489 4937 generic.go:334] "Generic (PLEG): container finished" podID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerID="a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8" exitCode=0 Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.261536 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerDied","Data":"a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8"} Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.261568 4937 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"721219b7-bb75-49ef-8bbc-09ca3e7eb76f","Type":"ContainerDied","Data":"e0c09c71b2a0014a01ac24f03eccf5b9894bf0c6e617fa2f5de10dc582029b54"} Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.261606 4937 scope.go:117] "RemoveContainer" containerID="7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.261769 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.288026 4937 scope.go:117] "RemoveContainer" containerID="a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.321514 4937 scope.go:117] "RemoveContainer" containerID="7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.329541 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.338493 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.350039 4937 scope.go:117] "RemoveContainer" containerID="96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.365758 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366457 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="extract-utilities" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366478 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="extract-utilities" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366503 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="extract-content" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366511 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="extract-content" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366521 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="config-reloader" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366528 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="config-reloader" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366553 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="prometheus" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366560 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="prometheus" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366576 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="registry-server" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366583 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="registry-server" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366626 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="thanos-sidecar" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366635 4937 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="thanos-sidecar" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.366656 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="init-config-reloader" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366664 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="init-config-reloader" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366922 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="config-reloader" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366947 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="prometheus" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366970 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" containerName="thanos-sidecar" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.366988 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a3fff4-b125-4026-b96e-67149dcf6b28" containerName="registry-server" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.369080 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.372206 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.372610 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-zrrzk" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.372974 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.373061 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.372974 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.373016 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.373633 4937 scope.go:117] "RemoveContainer" containerID="7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.373836 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.375221 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7\": container with ID starting with 7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7 not found: ID does not exist" 
containerID="7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.375257 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7"} err="failed to get container status \"7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7\": rpc error: code = NotFound desc = could not find container \"7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7\": container with ID starting with 7827ff53134da6accbd4ffe9c461824d2e44dc16226690833e115d309e7547e7 not found: ID does not exist" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.375289 4937 scope.go:117] "RemoveContainer" containerID="a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.376492 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8\": container with ID starting with a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8 not found: ID does not exist" containerID="a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.376521 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8"} err="failed to get container status \"a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8\": rpc error: code = NotFound desc = could not find container \"a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8\": container with ID starting with a6f431af3f2c20b47757726c1fcd8ad5824a370abd46704b435974847c67a6d8 not found: ID does not exist" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.376548 4937 scope.go:117] 
"RemoveContainer" containerID="7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.376845 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e\": container with ID starting with 7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e not found: ID does not exist" containerID="7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.376871 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e"} err="failed to get container status \"7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e\": rpc error: code = NotFound desc = could not find container \"7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e\": container with ID starting with 7eefb9ddb5f340653c16c1993accd8663d5abf88cda48eacd754eb67bad3619e not found: ID does not exist" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.376887 4937 scope.go:117] "RemoveContainer" containerID="96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2" Jan 23 07:24:25 crc kubenswrapper[4937]: E0123 07:24:25.377147 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2\": container with ID starting with 96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2 not found: ID does not exist" containerID="96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.377169 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2"} err="failed to get container status \"96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2\": rpc error: code = NotFound desc = could not find container \"96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2\": container with ID starting with 96825496f88e9042d001820cd8e9f9be44a82698b5406a8f35dd4bd32a907bc2 not found: ID does not exist" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.386147 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.399490 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535346 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535408 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535431 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535450 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535473 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535496 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535528 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 
07:24:25.535577 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535628 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535658 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gk8\" (UniqueName: \"kubernetes.io/projected/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-kube-api-access-67gk8\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535807 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535870 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.535965 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.638061 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.638620 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.639580 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.640356 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.640664 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.641098 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.641494 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.641539 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.641575 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.642139 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.642245 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.641664 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.643431 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.643475 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.643523 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67gk8\" (UniqueName: \"kubernetes.io/projected/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-kube-api-access-67gk8\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.643728 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.644672 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.651468 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc 
kubenswrapper[4937]: I0123 07:24:25.652788 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.652888 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.658933 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.659203 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.659211 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.659539 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.662652 4937 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.662703 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad88d819a9190e54c498f5e1a4ce0a9fbf70213240e060c76a145dc64e923a6c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.663681 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67gk8\" (UniqueName: \"kubernetes.io/projected/7d6faa78-d867-4969-8d8f-c97f2bd9f2de-kube-api-access-67gk8\") pod \"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:25 crc kubenswrapper[4937]: I0123 07:24:25.714723 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d1f5f39-7945-400d-aff1-4da23d4676b6\") pod 
\"prometheus-metric-storage-0\" (UID: \"7d6faa78-d867-4969-8d8f-c97f2bd9f2de\") " pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:26 crc kubenswrapper[4937]: I0123 07:24:26.020117 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:26 crc kubenswrapper[4937]: I0123 07:24:26.495355 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 07:24:26 crc kubenswrapper[4937]: W0123 07:24:26.496805 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d6faa78_d867_4969_8d8f_c97f2bd9f2de.slice/crio-40e0e491e4e359f7c07d674c4e85171f45382621e2b8fc8ffaa8e6ef0b440ef2 WatchSource:0}: Error finding container 40e0e491e4e359f7c07d674c4e85171f45382621e2b8fc8ffaa8e6ef0b440ef2: Status 404 returned error can't find the container with id 40e0e491e4e359f7c07d674c4e85171f45382621e2b8fc8ffaa8e6ef0b440ef2 Jan 23 07:24:26 crc kubenswrapper[4937]: I0123 07:24:26.545654 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="721219b7-bb75-49ef-8bbc-09ca3e7eb76f" path="/var/lib/kubelet/pods/721219b7-bb75-49ef-8bbc-09ca3e7eb76f/volumes" Jan 23 07:24:27 crc kubenswrapper[4937]: I0123 07:24:27.285534 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d6faa78-d867-4969-8d8f-c97f2bd9f2de","Type":"ContainerStarted","Data":"40e0e491e4e359f7c07d674c4e85171f45382621e2b8fc8ffaa8e6ef0b440ef2"} Jan 23 07:24:31 crc kubenswrapper[4937]: I0123 07:24:31.330804 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d6faa78-d867-4969-8d8f-c97f2bd9f2de","Type":"ContainerStarted","Data":"d36ed8490b930f5aa53b6844272b0f887a39e43e31490337472030556dc81cf6"} Jan 23 07:24:40 crc kubenswrapper[4937]: I0123 07:24:40.421346 4937 generic.go:334] "Generic (PLEG): container 
finished" podID="7d6faa78-d867-4969-8d8f-c97f2bd9f2de" containerID="d36ed8490b930f5aa53b6844272b0f887a39e43e31490337472030556dc81cf6" exitCode=0 Jan 23 07:24:40 crc kubenswrapper[4937]: I0123 07:24:40.421428 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d6faa78-d867-4969-8d8f-c97f2bd9f2de","Type":"ContainerDied","Data":"d36ed8490b930f5aa53b6844272b0f887a39e43e31490337472030556dc81cf6"} Jan 23 07:24:41 crc kubenswrapper[4937]: I0123 07:24:41.433167 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d6faa78-d867-4969-8d8f-c97f2bd9f2de","Type":"ContainerStarted","Data":"a4f348752accc40e6fd6f86b4ab7812a9a5adbf09900c686d2e2de41e0f39f0b"} Jan 23 07:24:44 crc kubenswrapper[4937]: I0123 07:24:44.464811 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d6faa78-d867-4969-8d8f-c97f2bd9f2de","Type":"ContainerStarted","Data":"fbea54ca9e1939564c4c82757e94d8c02423c846bfa1b78811a655132ee3144d"} Jan 23 07:24:45 crc kubenswrapper[4937]: I0123 07:24:45.477642 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7d6faa78-d867-4969-8d8f-c97f2bd9f2de","Type":"ContainerStarted","Data":"a2dc6628b02d16c2a98fb495503e689b4d66af7f274f5562d3335fc26e67fe7c"} Jan 23 07:24:45 crc kubenswrapper[4937]: I0123 07:24:45.548112 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.548095675 podStartE2EDuration="20.548095675s" podCreationTimestamp="2026-01-23 07:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:24:45.541619179 +0000 UTC m=+3085.345385832" watchObservedRunningTime="2026-01-23 07:24:45.548095675 +0000 UTC m=+3085.351862328" Jan 23 07:24:46 crc 
kubenswrapper[4937]: I0123 07:24:46.020983 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.099186 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fmc2f"] Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.103216 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.122863 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmc2f"] Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.175611 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xl9c\" (UniqueName: \"kubernetes.io/projected/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-kube-api-access-9xl9c\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.175877 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-utilities\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.176474 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-catalog-content\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.278487 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-catalog-content\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.278563 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xl9c\" (UniqueName: \"kubernetes.io/projected/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-kube-api-access-9xl9c\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.278634 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-utilities\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.279078 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-catalog-content\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.279112 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-utilities\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.311684 4937 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9xl9c\" (UniqueName: \"kubernetes.io/projected/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-kube-api-access-9xl9c\") pod \"redhat-marketplace-fmc2f\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.425570 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:49 crc kubenswrapper[4937]: I0123 07:24:49.929734 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmc2f"] Jan 23 07:24:50 crc kubenswrapper[4937]: I0123 07:24:50.545027 4937 generic.go:334] "Generic (PLEG): container finished" podID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerID="8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7" exitCode=0 Jan 23 07:24:50 crc kubenswrapper[4937]: I0123 07:24:50.546473 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmc2f" event={"ID":"83e1e46a-aee6-4e5e-95ec-48ad4549ef22","Type":"ContainerDied","Data":"8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7"} Jan 23 07:24:50 crc kubenswrapper[4937]: I0123 07:24:50.546505 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmc2f" event={"ID":"83e1e46a-aee6-4e5e-95ec-48ad4549ef22","Type":"ContainerStarted","Data":"5425236df19e3b9c3670f3d57f71c808e9847896e38af7b4c629dd33223c0f80"} Jan 23 07:24:51 crc kubenswrapper[4937]: I0123 07:24:51.559014 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmc2f" event={"ID":"83e1e46a-aee6-4e5e-95ec-48ad4549ef22","Type":"ContainerDied","Data":"7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2"} Jan 23 07:24:51 crc kubenswrapper[4937]: I0123 07:24:51.561110 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerID="7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2" exitCode=0 Jan 23 07:24:52 crc kubenswrapper[4937]: I0123 07:24:52.574894 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmc2f" event={"ID":"83e1e46a-aee6-4e5e-95ec-48ad4549ef22","Type":"ContainerStarted","Data":"3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2"} Jan 23 07:24:52 crc kubenswrapper[4937]: I0123 07:24:52.600756 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fmc2f" podStartSLOduration=2.148576154 podStartE2EDuration="3.600737833s" podCreationTimestamp="2026-01-23 07:24:49 +0000 UTC" firstStartedPulling="2026-01-23 07:24:50.549096594 +0000 UTC m=+3090.352863257" lastFinishedPulling="2026-01-23 07:24:52.001258273 +0000 UTC m=+3091.805024936" observedRunningTime="2026-01-23 07:24:52.597121805 +0000 UTC m=+3092.400888458" watchObservedRunningTime="2026-01-23 07:24:52.600737833 +0000 UTC m=+3092.404504486" Jan 23 07:24:56 crc kubenswrapper[4937]: I0123 07:24:56.021343 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:56 crc kubenswrapper[4937]: I0123 07:24:56.036124 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:56 crc kubenswrapper[4937]: I0123 07:24:56.631690 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 07:24:59 crc kubenswrapper[4937]: I0123 07:24:59.425692 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:59 crc kubenswrapper[4937]: I0123 07:24:59.426120 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:59 crc kubenswrapper[4937]: I0123 07:24:59.495193 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:59 crc kubenswrapper[4937]: I0123 07:24:59.718726 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:24:59 crc kubenswrapper[4937]: I0123 07:24:59.789991 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmc2f"] Jan 23 07:25:01 crc kubenswrapper[4937]: I0123 07:25:01.681820 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fmc2f" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="registry-server" containerID="cri-o://3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2" gracePeriod=2 Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.195616 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.369692 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-utilities\") pod \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.370274 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-catalog-content\") pod \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.370426 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xl9c\" (UniqueName: \"kubernetes.io/projected/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-kube-api-access-9xl9c\") pod \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\" (UID: \"83e1e46a-aee6-4e5e-95ec-48ad4549ef22\") " Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.370951 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-utilities" (OuterVolumeSpecName: "utilities") pod "83e1e46a-aee6-4e5e-95ec-48ad4549ef22" (UID: "83e1e46a-aee6-4e5e-95ec-48ad4549ef22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.379789 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-kube-api-access-9xl9c" (OuterVolumeSpecName: "kube-api-access-9xl9c") pod "83e1e46a-aee6-4e5e-95ec-48ad4549ef22" (UID: "83e1e46a-aee6-4e5e-95ec-48ad4549ef22"). InnerVolumeSpecName "kube-api-access-9xl9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.402447 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83e1e46a-aee6-4e5e-95ec-48ad4549ef22" (UID: "83e1e46a-aee6-4e5e-95ec-48ad4549ef22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.473167 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.473213 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.473231 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xl9c\" (UniqueName: \"kubernetes.io/projected/83e1e46a-aee6-4e5e-95ec-48ad4549ef22-kube-api-access-9xl9c\") on node \"crc\" DevicePath \"\"" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.694130 4937 generic.go:334] "Generic (PLEG): container finished" podID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerID="3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2" exitCode=0 Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.694213 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fmc2f" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.694241 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmc2f" event={"ID":"83e1e46a-aee6-4e5e-95ec-48ad4549ef22","Type":"ContainerDied","Data":"3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2"} Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.695934 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fmc2f" event={"ID":"83e1e46a-aee6-4e5e-95ec-48ad4549ef22","Type":"ContainerDied","Data":"5425236df19e3b9c3670f3d57f71c808e9847896e38af7b4c629dd33223c0f80"} Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.695962 4937 scope.go:117] "RemoveContainer" containerID="3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.723061 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmc2f"] Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.725528 4937 scope.go:117] "RemoveContainer" containerID="7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.732022 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fmc2f"] Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.747310 4937 scope.go:117] "RemoveContainer" containerID="8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.792436 4937 scope.go:117] "RemoveContainer" containerID="3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2" Jan 23 07:25:02 crc kubenswrapper[4937]: E0123 07:25:02.792825 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2\": container with ID starting with 3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2 not found: ID does not exist" containerID="3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.792858 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2"} err="failed to get container status \"3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2\": rpc error: code = NotFound desc = could not find container \"3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2\": container with ID starting with 3dd784b976ae5bcb797448667499b059c6c8f11ecca0cbf84f38600f394b26d2 not found: ID does not exist" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.792888 4937 scope.go:117] "RemoveContainer" containerID="7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2" Jan 23 07:25:02 crc kubenswrapper[4937]: E0123 07:25:02.793125 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2\": container with ID starting with 7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2 not found: ID does not exist" containerID="7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.793153 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2"} err="failed to get container status \"7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2\": rpc error: code = NotFound desc = could not find container \"7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2\": container with ID 
starting with 7c9181f12b786e7debc52ab250461a865ee0cf762964d09a497c6accd9d5edc2 not found: ID does not exist" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.793179 4937 scope.go:117] "RemoveContainer" containerID="8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7" Jan 23 07:25:02 crc kubenswrapper[4937]: E0123 07:25:02.793495 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7\": container with ID starting with 8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7 not found: ID does not exist" containerID="8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7" Jan 23 07:25:02 crc kubenswrapper[4937]: I0123 07:25:02.793533 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7"} err="failed to get container status \"8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7\": rpc error: code = NotFound desc = could not find container \"8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7\": container with ID starting with 8deebd1d0821a8829aa4f713a535ec56b46567e0ef32e7f42a675892ca79f1c7 not found: ID does not exist" Jan 23 07:25:04 crc kubenswrapper[4937]: I0123 07:25:04.545196 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" path="/var/lib/kubelet/pods/83e1e46a-aee6-4e5e-95ec-48ad4549ef22/volumes" Jan 23 07:25:07 crc kubenswrapper[4937]: I0123 07:25:07.724767 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:25:07 crc kubenswrapper[4937]: I0123 
07:25:07.725532 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.733994 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 07:25:16 crc kubenswrapper[4937]: E0123 07:25:16.734948 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="extract-content" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.734964 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="extract-content" Jan 23 07:25:16 crc kubenswrapper[4937]: E0123 07:25:16.734985 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="extract-utilities" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.734991 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="extract-utilities" Jan 23 07:25:16 crc kubenswrapper[4937]: E0123 07:25:16.735014 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="registry-server" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.735021 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="registry-server" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.735216 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e1e46a-aee6-4e5e-95ec-48ad4549ef22" containerName="registry-server" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.736310 4937 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.741320 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.742242 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-h5f74" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.742485 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.742671 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.762668 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.798203 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3373921-706d-4d27-a1c5-b8aaa6179a0f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.798260 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3373921-706d-4d27-a1c5-b8aaa6179a0f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.798291 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.899863 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3373921-706d-4d27-a1c5-b8aaa6179a0f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.899909 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3373921-706d-4d27-a1c5-b8aaa6179a0f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.899952 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3373921-706d-4d27-a1c5-b8aaa6179a0f-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.899986 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3373921-706d-4d27-a1c5-b8aaa6179a0f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.900051 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.900079 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.900116 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.900133 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bp58\" (UniqueName: \"kubernetes.io/projected/b3373921-706d-4d27-a1c5-b8aaa6179a0f-kube-api-access-5bp58\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.900160 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.901199 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3373921-706d-4d27-a1c5-b8aaa6179a0f-openstack-config\") pod 
\"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.901921 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3373921-706d-4d27-a1c5-b8aaa6179a0f-config-data\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:16 crc kubenswrapper[4937]: I0123 07:25:16.905249 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.003494 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3373921-706d-4d27-a1c5-b8aaa6179a0f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.003680 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3373921-706d-4d27-a1c5-b8aaa6179a0f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.003877 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") 
" pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.003942 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bp58\" (UniqueName: \"kubernetes.io/projected/b3373921-706d-4d27-a1c5-b8aaa6179a0f-kube-api-access-5bp58\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.003977 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.004029 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.004223 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3373921-706d-4d27-a1c5-b8aaa6179a0f-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.004535 4937 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.004563 
4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3373921-706d-4d27-a1c5-b8aaa6179a0f-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.008831 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.009112 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3373921-706d-4d27-a1c5-b8aaa6179a0f-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.023956 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bp58\" (UniqueName: \"kubernetes.io/projected/b3373921-706d-4d27-a1c5-b8aaa6179a0f-kube-api-access-5bp58\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.043502 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"b3373921-706d-4d27-a1c5-b8aaa6179a0f\") " pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.072928 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.559177 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 07:25:17 crc kubenswrapper[4937]: W0123 07:25:17.576325 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3373921_706d_4d27_a1c5_b8aaa6179a0f.slice/crio-c439983c9903e57ab3c9bc4c533452d175ae037cf6048a711a70ca308b7b6cd0 WatchSource:0}: Error finding container c439983c9903e57ab3c9bc4c533452d175ae037cf6048a711a70ca308b7b6cd0: Status 404 returned error can't find the container with id c439983c9903e57ab3c9bc4c533452d175ae037cf6048a711a70ca308b7b6cd0 Jan 23 07:25:17 crc kubenswrapper[4937]: I0123 07:25:17.908465 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b3373921-706d-4d27-a1c5-b8aaa6179a0f","Type":"ContainerStarted","Data":"c439983c9903e57ab3c9bc4c533452d175ae037cf6048a711a70ca308b7b6cd0"} Jan 23 07:25:28 crc kubenswrapper[4937]: I0123 07:25:28.442074 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 07:25:30 crc kubenswrapper[4937]: I0123 07:25:30.045168 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b3373921-706d-4d27-a1c5-b8aaa6179a0f","Type":"ContainerStarted","Data":"1f1815e9202adc928ca24581e817d419540324ab3aff57e51261d9ebc4e20027"} Jan 23 07:25:30 crc kubenswrapper[4937]: I0123 07:25:30.076644 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.21718272 podStartE2EDuration="15.076620786s" podCreationTimestamp="2026-01-23 07:25:15 +0000 UTC" firstStartedPulling="2026-01-23 07:25:17.579358697 +0000 UTC m=+3117.383125380" lastFinishedPulling="2026-01-23 07:25:28.438796793 +0000 UTC 
m=+3128.242563446" observedRunningTime="2026-01-23 07:25:30.068426732 +0000 UTC m=+3129.872193395" watchObservedRunningTime="2026-01-23 07:25:30.076620786 +0000 UTC m=+3129.880387439" Jan 23 07:25:37 crc kubenswrapper[4937]: I0123 07:25:37.724828 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:25:37 crc kubenswrapper[4937]: I0123 07:25:37.727134 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:26:07 crc kubenswrapper[4937]: I0123 07:26:07.724108 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:26:07 crc kubenswrapper[4937]: I0123 07:26:07.724793 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:26:07 crc kubenswrapper[4937]: I0123 07:26:07.724860 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:26:07 crc kubenswrapper[4937]: I0123 07:26:07.725939 4937 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:26:07 crc kubenswrapper[4937]: I0123 07:26:07.726032 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" gracePeriod=600 Jan 23 07:26:07 crc kubenswrapper[4937]: E0123 07:26:07.871601 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:26:08 crc kubenswrapper[4937]: I0123 07:26:08.441127 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" exitCode=0 Jan 23 07:26:08 crc kubenswrapper[4937]: I0123 07:26:08.441171 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"} Jan 23 07:26:08 crc kubenswrapper[4937]: I0123 07:26:08.441206 4937 scope.go:117] "RemoveContainer" containerID="4c7a1b80192dda9392c131e247b19f3c7c53deeda7bfef89da6fd5a6fe441dd0" Jan 23 07:26:08 crc 
kubenswrapper[4937]: I0123 07:26:08.441970 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:26:08 crc kubenswrapper[4937]: E0123 07:26:08.442449 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:26:19 crc kubenswrapper[4937]: I0123 07:26:19.526294 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:26:19 crc kubenswrapper[4937]: E0123 07:26:19.527152 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:26:32 crc kubenswrapper[4937]: I0123 07:26:32.527888 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:26:32 crc kubenswrapper[4937]: E0123 07:26:32.528644 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 
23 07:26:43 crc kubenswrapper[4937]: I0123 07:26:43.527211 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:26:43 crc kubenswrapper[4937]: E0123 07:26:43.528138 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:26:56 crc kubenswrapper[4937]: I0123 07:26:56.527269 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:26:56 crc kubenswrapper[4937]: E0123 07:26:56.529094 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:27:09 crc kubenswrapper[4937]: I0123 07:27:09.527027 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:27:09 crc kubenswrapper[4937]: E0123 07:27:09.527950 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:27:20 crc kubenswrapper[4937]: I0123 07:27:20.534186 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:27:20 crc kubenswrapper[4937]: E0123 07:27:20.535148 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:27:32 crc kubenswrapper[4937]: I0123 07:27:32.526699 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:27:32 crc kubenswrapper[4937]: E0123 07:27:32.527883 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:27:46 crc kubenswrapper[4937]: I0123 07:27:46.527203 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:27:46 crc kubenswrapper[4937]: E0123 07:27:46.528089 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:27:58 crc kubenswrapper[4937]: I0123 07:27:58.527303 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:27:58 crc kubenswrapper[4937]: E0123 07:27:58.528245 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:28:13 crc kubenswrapper[4937]: I0123 07:28:13.527212 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:28:13 crc kubenswrapper[4937]: E0123 07:28:13.527908 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:28:26 crc kubenswrapper[4937]: I0123 07:28:26.526900 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:28:26 crc kubenswrapper[4937]: E0123 07:28:26.528290 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:28:37 crc kubenswrapper[4937]: I0123 07:28:37.528736 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:28:37 crc kubenswrapper[4937]: E0123 07:28:37.530648 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:28:51 crc kubenswrapper[4937]: I0123 07:28:51.526978 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:28:51 crc kubenswrapper[4937]: E0123 07:28:51.527890 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:29:06 crc kubenswrapper[4937]: I0123 07:29:06.526556 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:29:06 crc kubenswrapper[4937]: E0123 07:29:06.528305 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:29:20 crc kubenswrapper[4937]: I0123 07:29:20.539172 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:29:20 crc kubenswrapper[4937]: E0123 07:29:20.540164 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:29:31 crc kubenswrapper[4937]: I0123 07:29:31.853122 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k69gl"] Jan 23 07:29:31 crc kubenswrapper[4937]: I0123 07:29:31.861367 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:31 crc kubenswrapper[4937]: I0123 07:29:31.872060 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k69gl"] Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.022038 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-catalog-content\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.022218 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8kb\" (UniqueName: \"kubernetes.io/projected/a5cceff6-2e86-46e3-b986-6eddb345d60d-kube-api-access-9q8kb\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.022357 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-utilities\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.124632 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-catalog-content\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.124791 4937 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-9q8kb\" (UniqueName: \"kubernetes.io/projected/a5cceff6-2e86-46e3-b986-6eddb345d60d-kube-api-access-9q8kb\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.124876 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-utilities\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.125286 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-catalog-content\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.125409 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-utilities\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.151395 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8kb\" (UniqueName: \"kubernetes.io/projected/a5cceff6-2e86-46e3-b986-6eddb345d60d-kube-api-access-9q8kb\") pod \"redhat-operators-k69gl\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.188276 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:32 crc kubenswrapper[4937]: I0123 07:29:32.699550 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k69gl"] Jan 23 07:29:33 crc kubenswrapper[4937]: I0123 07:29:33.505492 4937 generic.go:334] "Generic (PLEG): container finished" podID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerID="74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6" exitCode=0 Jan 23 07:29:33 crc kubenswrapper[4937]: I0123 07:29:33.505565 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerDied","Data":"74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6"} Jan 23 07:29:33 crc kubenswrapper[4937]: I0123 07:29:33.505838 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerStarted","Data":"93070eebb323f5cbc2f4ebf10a2def396373055753d1df666d0333530d97c1c9"} Jan 23 07:29:33 crc kubenswrapper[4937]: I0123 07:29:33.509075 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:29:33 crc kubenswrapper[4937]: I0123 07:29:33.526791 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:29:33 crc kubenswrapper[4937]: E0123 07:29:33.527236 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:29:34 crc 
kubenswrapper[4937]: I0123 07:29:34.539159 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerStarted","Data":"72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a"} Jan 23 07:29:37 crc kubenswrapper[4937]: I0123 07:29:37.560794 4937 generic.go:334] "Generic (PLEG): container finished" podID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerID="72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a" exitCode=0 Jan 23 07:29:37 crc kubenswrapper[4937]: I0123 07:29:37.561028 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerDied","Data":"72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a"} Jan 23 07:29:38 crc kubenswrapper[4937]: I0123 07:29:38.573254 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerStarted","Data":"65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5"} Jan 23 07:29:38 crc kubenswrapper[4937]: I0123 07:29:38.592062 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k69gl" podStartSLOduration=3.105576776 podStartE2EDuration="7.592043967s" podCreationTimestamp="2026-01-23 07:29:31 +0000 UTC" firstStartedPulling="2026-01-23 07:29:33.508655714 +0000 UTC m=+3373.312422367" lastFinishedPulling="2026-01-23 07:29:37.995122915 +0000 UTC m=+3377.798889558" observedRunningTime="2026-01-23 07:29:38.590963177 +0000 UTC m=+3378.394729830" watchObservedRunningTime="2026-01-23 07:29:38.592043967 +0000 UTC m=+3378.395810620" Jan 23 07:29:42 crc kubenswrapper[4937]: I0123 07:29:42.188716 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:42 crc kubenswrapper[4937]: I0123 07:29:42.189406 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:43 crc kubenswrapper[4937]: I0123 07:29:43.236988 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k69gl" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="registry-server" probeResult="failure" output=< Jan 23 07:29:43 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 07:29:43 crc kubenswrapper[4937]: > Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.617663 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rcw4f"] Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.622537 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.635920 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcw4f"] Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.711726 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mrp\" (UniqueName: \"kubernetes.io/projected/966b2713-8da2-44d3-b27d-9cf743de26ad-kube-api-access-79mrp\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.711790 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-catalog-content\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " 
pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.712134 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-utilities\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.813845 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-utilities\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.814251 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79mrp\" (UniqueName: \"kubernetes.io/projected/966b2713-8da2-44d3-b27d-9cf743de26ad-kube-api-access-79mrp\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.814314 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-catalog-content\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.814433 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-utilities\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " 
pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.814896 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-catalog-content\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.839047 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79mrp\" (UniqueName: \"kubernetes.io/projected/966b2713-8da2-44d3-b27d-9cf743de26ad-kube-api-access-79mrp\") pod \"certified-operators-rcw4f\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:44 crc kubenswrapper[4937]: I0123 07:29:44.955563 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:45 crc kubenswrapper[4937]: I0123 07:29:45.528375 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rcw4f"] Jan 23 07:29:45 crc kubenswrapper[4937]: I0123 07:29:45.653253 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerStarted","Data":"23bd11f63cc35de872333f5a876d929d32892b7a3f38942a137dc5adddf9a928"} Jan 23 07:29:46 crc kubenswrapper[4937]: I0123 07:29:46.664981 4937 generic.go:334] "Generic (PLEG): container finished" podID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerID="1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793" exitCode=0 Jan 23 07:29:46 crc kubenswrapper[4937]: I0123 07:29:46.665247 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" 
event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerDied","Data":"1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793"} Jan 23 07:29:47 crc kubenswrapper[4937]: I0123 07:29:47.676024 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerStarted","Data":"4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772"} Jan 23 07:29:48 crc kubenswrapper[4937]: I0123 07:29:48.533449 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:29:48 crc kubenswrapper[4937]: E0123 07:29:48.534140 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:29:48 crc kubenswrapper[4937]: I0123 07:29:48.685829 4937 generic.go:334] "Generic (PLEG): container finished" podID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerID="4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772" exitCode=0 Jan 23 07:29:48 crc kubenswrapper[4937]: I0123 07:29:48.685867 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerDied","Data":"4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772"} Jan 23 07:29:49 crc kubenswrapper[4937]: I0123 07:29:49.715760 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" 
event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerStarted","Data":"1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b"} Jan 23 07:29:49 crc kubenswrapper[4937]: I0123 07:29:49.746182 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rcw4f" podStartSLOduration=3.307918252 podStartE2EDuration="5.746153044s" podCreationTimestamp="2026-01-23 07:29:44 +0000 UTC" firstStartedPulling="2026-01-23 07:29:46.667221975 +0000 UTC m=+3386.470988628" lastFinishedPulling="2026-01-23 07:29:49.105456747 +0000 UTC m=+3388.909223420" observedRunningTime="2026-01-23 07:29:49.738498387 +0000 UTC m=+3389.542265040" watchObservedRunningTime="2026-01-23 07:29:49.746153044 +0000 UTC m=+3389.549919697" Jan 23 07:29:52 crc kubenswrapper[4937]: I0123 07:29:52.249994 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:52 crc kubenswrapper[4937]: I0123 07:29:52.324133 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:52 crc kubenswrapper[4937]: I0123 07:29:52.987407 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k69gl"] Jan 23 07:29:53 crc kubenswrapper[4937]: I0123 07:29:53.749911 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k69gl" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="registry-server" containerID="cri-o://65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5" gracePeriod=2 Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.272096 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.343220 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q8kb\" (UniqueName: \"kubernetes.io/projected/a5cceff6-2e86-46e3-b986-6eddb345d60d-kube-api-access-9q8kb\") pod \"a5cceff6-2e86-46e3-b986-6eddb345d60d\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.343351 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-utilities\") pod \"a5cceff6-2e86-46e3-b986-6eddb345d60d\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.343462 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-catalog-content\") pod \"a5cceff6-2e86-46e3-b986-6eddb345d60d\" (UID: \"a5cceff6-2e86-46e3-b986-6eddb345d60d\") " Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.345206 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-utilities" (OuterVolumeSpecName: "utilities") pod "a5cceff6-2e86-46e3-b986-6eddb345d60d" (UID: "a5cceff6-2e86-46e3-b986-6eddb345d60d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.350839 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5cceff6-2e86-46e3-b986-6eddb345d60d-kube-api-access-9q8kb" (OuterVolumeSpecName: "kube-api-access-9q8kb") pod "a5cceff6-2e86-46e3-b986-6eddb345d60d" (UID: "a5cceff6-2e86-46e3-b986-6eddb345d60d"). InnerVolumeSpecName "kube-api-access-9q8kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.446679 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q8kb\" (UniqueName: \"kubernetes.io/projected/a5cceff6-2e86-46e3-b986-6eddb345d60d-kube-api-access-9q8kb\") on node \"crc\" DevicePath \"\"" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.446727 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.465256 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5cceff6-2e86-46e3-b986-6eddb345d60d" (UID: "a5cceff6-2e86-46e3-b986-6eddb345d60d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.548028 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cceff6-2e86-46e3-b986-6eddb345d60d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.764543 4937 generic.go:334] "Generic (PLEG): container finished" podID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerID="65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5" exitCode=0 Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.764635 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerDied","Data":"65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5"} Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.764646 4937 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k69gl" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.764689 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k69gl" event={"ID":"a5cceff6-2e86-46e3-b986-6eddb345d60d","Type":"ContainerDied","Data":"93070eebb323f5cbc2f4ebf10a2def396373055753d1df666d0333530d97c1c9"} Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.764718 4937 scope.go:117] "RemoveContainer" containerID="65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.813699 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k69gl"] Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.814176 4937 scope.go:117] "RemoveContainer" containerID="72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.823517 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k69gl"] Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.843583 4937 scope.go:117] "RemoveContainer" containerID="74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.893309 4937 scope.go:117] "RemoveContainer" containerID="65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5" Jan 23 07:29:54 crc kubenswrapper[4937]: E0123 07:29:54.894219 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5\": container with ID starting with 65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5 not found: ID does not exist" containerID="65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.894286 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5"} err="failed to get container status \"65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5\": rpc error: code = NotFound desc = could not find container \"65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5\": container with ID starting with 65a90f94db7138f977b9e7f992fd71570df125bb28fe3eee03e226386c0ce5a5 not found: ID does not exist" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.894329 4937 scope.go:117] "RemoveContainer" containerID="72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a" Jan 23 07:29:54 crc kubenswrapper[4937]: E0123 07:29:54.894843 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a\": container with ID starting with 72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a not found: ID does not exist" containerID="72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.894874 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a"} err="failed to get container status \"72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a\": rpc error: code = NotFound desc = could not find container \"72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a\": container with ID starting with 72a0b1902ee59c7f1a19a3d2d4c50b11fb638f1f92fa1639ac5082b31da6ba9a not found: ID does not exist" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.894896 4937 scope.go:117] "RemoveContainer" containerID="74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6" Jan 23 07:29:54 crc kubenswrapper[4937]: E0123 
07:29:54.895359 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6\": container with ID starting with 74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6 not found: ID does not exist" containerID="74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.895401 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6"} err="failed to get container status \"74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6\": rpc error: code = NotFound desc = could not find container \"74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6\": container with ID starting with 74fd4b37d70da831e8ef79fb0b195cf3aca3f33c61105856251e63fd775672f6 not found: ID does not exist" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.956302 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:54 crc kubenswrapper[4937]: I0123 07:29:54.956364 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:55 crc kubenswrapper[4937]: I0123 07:29:55.020655 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:55 crc kubenswrapper[4937]: I0123 07:29:55.832766 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:56 crc kubenswrapper[4937]: I0123 07:29:56.541144 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" 
path="/var/lib/kubelet/pods/a5cceff6-2e86-46e3-b986-6eddb345d60d/volumes" Jan 23 07:29:57 crc kubenswrapper[4937]: I0123 07:29:57.387954 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcw4f"] Jan 23 07:29:57 crc kubenswrapper[4937]: I0123 07:29:57.799015 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rcw4f" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="registry-server" containerID="cri-o://1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b" gracePeriod=2 Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.321542 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rcw4f" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.425739 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-catalog-content\") pod \"966b2713-8da2-44d3-b27d-9cf743de26ad\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.425929 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-utilities\") pod \"966b2713-8da2-44d3-b27d-9cf743de26ad\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.425961 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79mrp\" (UniqueName: \"kubernetes.io/projected/966b2713-8da2-44d3-b27d-9cf743de26ad-kube-api-access-79mrp\") pod \"966b2713-8da2-44d3-b27d-9cf743de26ad\" (UID: \"966b2713-8da2-44d3-b27d-9cf743de26ad\") " Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.426773 4937 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-utilities" (OuterVolumeSpecName: "utilities") pod "966b2713-8da2-44d3-b27d-9cf743de26ad" (UID: "966b2713-8da2-44d3-b27d-9cf743de26ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.431661 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966b2713-8da2-44d3-b27d-9cf743de26ad-kube-api-access-79mrp" (OuterVolumeSpecName: "kube-api-access-79mrp") pod "966b2713-8da2-44d3-b27d-9cf743de26ad" (UID: "966b2713-8da2-44d3-b27d-9cf743de26ad"). InnerVolumeSpecName "kube-api-access-79mrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.472071 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "966b2713-8da2-44d3-b27d-9cf743de26ad" (UID: "966b2713-8da2-44d3-b27d-9cf743de26ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.528274 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79mrp\" (UniqueName: \"kubernetes.io/projected/966b2713-8da2-44d3-b27d-9cf743de26ad-kube-api-access-79mrp\") on node \"crc\" DevicePath \"\"" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.528334 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.528348 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966b2713-8da2-44d3-b27d-9cf743de26ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.810272 4937 generic.go:334] "Generic (PLEG): container finished" podID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerID="1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b" exitCode=0 Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.810323 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerDied","Data":"1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b"} Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.810330 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rcw4f"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.810363 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rcw4f" event={"ID":"966b2713-8da2-44d3-b27d-9cf743de26ad","Type":"ContainerDied","Data":"23bd11f63cc35de872333f5a876d929d32892b7a3f38942a137dc5adddf9a928"}
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.810379 4937 scope.go:117] "RemoveContainer" containerID="1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.840140 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rcw4f"]
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.842728 4937 scope.go:117] "RemoveContainer" containerID="4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.848743 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rcw4f"]
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.871950 4937 scope.go:117] "RemoveContainer" containerID="1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.928249 4937 scope.go:117] "RemoveContainer" containerID="1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b"
Jan 23 07:29:58 crc kubenswrapper[4937]: E0123 07:29:58.928844 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b\": container with ID starting with 1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b not found: ID does not exist" containerID="1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.928895 4937
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b"} err="failed to get container status \"1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b\": rpc error: code = NotFound desc = could not find container \"1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b\": container with ID starting with 1ff92dcb38c6e8cee00eb853e15f491bcaa77cadd2abbd7267774e1328b5981b not found: ID does not exist"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.928923 4937 scope.go:117] "RemoveContainer" containerID="4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772"
Jan 23 07:29:58 crc kubenswrapper[4937]: E0123 07:29:58.929443 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772\": container with ID starting with 4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772 not found: ID does not exist" containerID="4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.929469 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772"} err="failed to get container status \"4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772\": rpc error: code = NotFound desc = could not find container \"4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772\": container with ID starting with 4c2191c3144e15ff37025958fba4fba526ee4b5c402dbd6deddbf37f50751772 not found: ID does not exist"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.929483 4937 scope.go:117] "RemoveContainer" containerID="1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793"
Jan 23 07:29:58 crc kubenswrapper[4937]: E0123
07:29:58.929992 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793\": container with ID starting with 1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793 not found: ID does not exist" containerID="1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793"
Jan 23 07:29:58 crc kubenswrapper[4937]: I0123 07:29:58.930032 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793"} err="failed to get container status \"1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793\": rpc error: code = NotFound desc = could not find container \"1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793\": container with ID starting with 1b341da2fcfe24ffb57e05a94bfc92f953cf19158509ff7811156667693f0793 not found: ID does not exist"
Jan 23 07:29:59 crc kubenswrapper[4937]: I0123 07:29:59.526526 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:29:59 crc kubenswrapper[4937]: E0123 07:29:59.527615 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.162481 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"]
Jan 23 07:30:00 crc kubenswrapper[4937]: E0123 07:30:00.162945 4937 cpu_manager.go:410] "RemoveStaleState: removing
container" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="extract-utilities"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.162962 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="extract-utilities"
Jan 23 07:30:00 crc kubenswrapper[4937]: E0123 07:30:00.162987 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="extract-utilities"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.162993 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="extract-utilities"
Jan 23 07:30:00 crc kubenswrapper[4937]: E0123 07:30:00.163008 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="registry-server"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.163014 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="registry-server"
Jan 23 07:30:00 crc kubenswrapper[4937]: E0123 07:30:00.163025 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="extract-content"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.163030 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="extract-content"
Jan 23 07:30:00 crc kubenswrapper[4937]: E0123 07:30:00.163043 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="registry-server"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.163049 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="registry-server"
Jan 23 07:30:00 crc kubenswrapper[4937]: E0123 07:30:00.163056 4937 cpu_manager.go:410] "RemoveStaleState: removing
container" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="extract-content"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.163061 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="extract-content"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.163263 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5cceff6-2e86-46e3-b986-6eddb345d60d" containerName="registry-server"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.163284 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" containerName="registry-server"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.164026 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.167619 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.167836 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.176971 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"]
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.263939 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431a4044-87df-4912-ba91-8834cbba5091-config-volume\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.264113
4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431a4044-87df-4912-ba91-8834cbba5091-secret-volume\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.264427 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phglj\" (UniqueName: \"kubernetes.io/projected/431a4044-87df-4912-ba91-8834cbba5091-kube-api-access-phglj\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.366082 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phglj\" (UniqueName: \"kubernetes.io/projected/431a4044-87df-4912-ba91-8834cbba5091-kube-api-access-phglj\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.366168 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431a4044-87df-4912-ba91-8834cbba5091-config-volume\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.366319 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431a4044-87df-4912-ba91-8834cbba5091-secret-volume\") pod \"collect-profiles-29485890-mgdct\" (UID:
\"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.367850 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431a4044-87df-4912-ba91-8834cbba5091-config-volume\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.380807 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431a4044-87df-4912-ba91-8834cbba5091-secret-volume\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.391860 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phglj\" (UniqueName: \"kubernetes.io/projected/431a4044-87df-4912-ba91-8834cbba5091-kube-api-access-phglj\") pod \"collect-profiles-29485890-mgdct\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.487761 4937 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.541293 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966b2713-8da2-44d3-b27d-9cf743de26ad" path="/var/lib/kubelet/pods/966b2713-8da2-44d3-b27d-9cf743de26ad/volumes"
Jan 23 07:30:00 crc kubenswrapper[4937]: I0123 07:30:00.946988 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"]
Jan 23 07:30:01 crc kubenswrapper[4937]: I0123 07:30:01.848879 4937 generic.go:334] "Generic (PLEG): container finished" podID="431a4044-87df-4912-ba91-8834cbba5091" containerID="a40aa4bc4ba29678bcd240cbd82538dcee11dac2ba9699124ba892513f6164f0" exitCode=0
Jan 23 07:30:01 crc kubenswrapper[4937]: I0123 07:30:01.848972 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct" event={"ID":"431a4044-87df-4912-ba91-8834cbba5091","Type":"ContainerDied","Data":"a40aa4bc4ba29678bcd240cbd82538dcee11dac2ba9699124ba892513f6164f0"}
Jan 23 07:30:01 crc kubenswrapper[4937]: I0123 07:30:01.849236 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct" event={"ID":"431a4044-87df-4912-ba91-8834cbba5091","Type":"ContainerStarted","Data":"de1abd8c6ddc4eed8829ed3a3085b7f9d5cf6a2da0f72d755470349600d9495a"}
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.287819 4937 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.333053 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431a4044-87df-4912-ba91-8834cbba5091-secret-volume\") pod \"431a4044-87df-4912-ba91-8834cbba5091\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") "
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.333228 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phglj\" (UniqueName: \"kubernetes.io/projected/431a4044-87df-4912-ba91-8834cbba5091-kube-api-access-phglj\") pod \"431a4044-87df-4912-ba91-8834cbba5091\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") "
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.333309 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431a4044-87df-4912-ba91-8834cbba5091-config-volume\") pod \"431a4044-87df-4912-ba91-8834cbba5091\" (UID: \"431a4044-87df-4912-ba91-8834cbba5091\") "
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.333922 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/431a4044-87df-4912-ba91-8834cbba5091-config-volume" (OuterVolumeSpecName: "config-volume") pod "431a4044-87df-4912-ba91-8834cbba5091" (UID: "431a4044-87df-4912-ba91-8834cbba5091"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.334418 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/431a4044-87df-4912-ba91-8834cbba5091-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.339237 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431a4044-87df-4912-ba91-8834cbba5091-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "431a4044-87df-4912-ba91-8834cbba5091" (UID: "431a4044-87df-4912-ba91-8834cbba5091"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.339572 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431a4044-87df-4912-ba91-8834cbba5091-kube-api-access-phglj" (OuterVolumeSpecName: "kube-api-access-phglj") pod "431a4044-87df-4912-ba91-8834cbba5091" (UID: "431a4044-87df-4912-ba91-8834cbba5091"). InnerVolumeSpecName "kube-api-access-phglj".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.436562 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/431a4044-87df-4912-ba91-8834cbba5091-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.436980 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phglj\" (UniqueName: \"kubernetes.io/projected/431a4044-87df-4912-ba91-8834cbba5091-kube-api-access-phglj\") on node \"crc\" DevicePath \"\""
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.868749 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct" event={"ID":"431a4044-87df-4912-ba91-8834cbba5091","Type":"ContainerDied","Data":"de1abd8c6ddc4eed8829ed3a3085b7f9d5cf6a2da0f72d755470349600d9495a"}
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.868971 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1abd8c6ddc4eed8829ed3a3085b7f9d5cf6a2da0f72d755470349600d9495a"
Jan 23 07:30:03 crc kubenswrapper[4937]: I0123 07:30:03.868785 4937 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"
Jan 23 07:30:04 crc kubenswrapper[4937]: I0123 07:30:04.384573 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"]
Jan 23 07:30:04 crc kubenswrapper[4937]: I0123 07:30:04.394442 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485845-tf8tf"]
Jan 23 07:30:04 crc kubenswrapper[4937]: I0123 07:30:04.538056 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d742ae3a-78f8-4ba5-9722-d385565718e3" path="/var/lib/kubelet/pods/d742ae3a-78f8-4ba5-9722-d385565718e3/volumes"
Jan 23 07:30:08 crc kubenswrapper[4937]: I0123 07:30:08.777170 4937 scope.go:117] "RemoveContainer" containerID="864264824ced3c77b9279620a7150395c9a4df6deb72da12ea14c11eb2156891"
Jan 23 07:30:13 crc kubenswrapper[4937]: I0123 07:30:13.527071 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:30:13 crc kubenswrapper[4937]: E0123 07:30:13.528302 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:30:25 crc kubenswrapper[4937]: I0123 07:30:25.527080 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:30:25 crc kubenswrapper[4937]: E0123 07:30:25.528640 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:30:39 crc kubenswrapper[4937]: I0123 07:30:39.526241 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:30:39 crc kubenswrapper[4937]: E0123 07:30:39.527266 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:30:53 crc kubenswrapper[4937]: I0123 07:30:53.527171 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:30:53 crc kubenswrapper[4937]: E0123 07:30:53.528278 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:31:04 crc kubenswrapper[4937]: I0123 07:31:04.527080 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:31:04 crc kubenswrapper[4937]: E0123 07:31:04.528110 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff:
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:31:17 crc kubenswrapper[4937]: I0123 07:31:17.526828 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c"
Jan 23 07:31:18 crc kubenswrapper[4937]: I0123 07:31:18.679847 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"42bd4d42c3e8267bf4a52b2a98b7f0d0a5c5c6594069ffc43b2ef619d0de7d61"}
Jan 23 07:33:37 crc kubenswrapper[4937]: I0123 07:33:37.724395 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:33:37 crc kubenswrapper[4937]: I0123 07:33:37.724883 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.691081 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6dzs"]
Jan 23 07:33:39 crc kubenswrapper[4937]: E0123 07:33:39.692040 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431a4044-87df-4912-ba91-8834cbba5091" containerName="collect-profiles"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.692062 4937
state_mem.go:107] "Deleted CPUSet assignment" podUID="431a4044-87df-4912-ba91-8834cbba5091" containerName="collect-profiles"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.692404 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="431a4044-87df-4912-ba91-8834cbba5091" containerName="collect-profiles"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.694309 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.725631 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6dzs"]
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.753583 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj2cv\" (UniqueName: \"kubernetes.io/projected/bb293b90-f69e-4e44-b357-f0fd6d867022-kube-api-access-jj2cv\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.754703 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-catalog-content\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.755046 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-utilities\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123
07:33:39.856992 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-utilities\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.857168 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj2cv\" (UniqueName: \"kubernetes.io/projected/bb293b90-f69e-4e44-b357-f0fd6d867022-kube-api-access-jj2cv\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.857202 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-catalog-content\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.857893 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-utilities\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.860795 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-catalog-content\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:39 crc kubenswrapper[4937]: I0123 07:33:39.879587 4937
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj2cv\" (UniqueName: \"kubernetes.io/projected/bb293b90-f69e-4e44-b357-f0fd6d867022-kube-api-access-jj2cv\") pod \"community-operators-p6dzs\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:40 crc kubenswrapper[4937]: I0123 07:33:40.017244 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:40 crc kubenswrapper[4937]: I0123 07:33:40.674580 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6dzs"]
Jan 23 07:33:41 crc kubenswrapper[4937]: I0123 07:33:41.359659 4937 generic.go:334] "Generic (PLEG): container finished" podID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerID="fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f" exitCode=0
Jan 23 07:33:41 crc kubenswrapper[4937]: I0123 07:33:41.359739 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerDied","Data":"fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f"}
Jan 23 07:33:41 crc kubenswrapper[4937]: I0123 07:33:41.359789 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerStarted","Data":"a353d31472854b787e9982f5137a7667bfd363b1342c86d05ac477ab7c2e40a3"}
Jan 23 07:33:42 crc kubenswrapper[4937]: I0123 07:33:42.368857 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerStarted","Data":"f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a"}
Jan 23 07:33:43 crc kubenswrapper[4937]: I0123 07:33:43.378698 4937
generic.go:334] "Generic (PLEG): container finished" podID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerID="f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a" exitCode=0
Jan 23 07:33:43 crc kubenswrapper[4937]: I0123 07:33:43.378770 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerDied","Data":"f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a"}
Jan 23 07:33:44 crc kubenswrapper[4937]: I0123 07:33:44.391229 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerStarted","Data":"b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d"}
Jan 23 07:33:44 crc kubenswrapper[4937]: I0123 07:33:44.414467 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6dzs" podStartSLOduration=2.962095423 podStartE2EDuration="5.414447928s" podCreationTimestamp="2026-01-23 07:33:39 +0000 UTC" firstStartedPulling="2026-01-23 07:33:41.362740257 +0000 UTC m=+3621.166506920" lastFinishedPulling="2026-01-23 07:33:43.815092762 +0000 UTC m=+3623.618859425" observedRunningTime="2026-01-23 07:33:44.409252287 +0000 UTC m=+3624.213018960" watchObservedRunningTime="2026-01-23 07:33:44.414447928 +0000 UTC m=+3624.218214591"
Jan 23 07:33:50 crc kubenswrapper[4937]: I0123 07:33:50.018261 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:50 crc kubenswrapper[4937]: I0123 07:33:50.018891 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6dzs"
Jan 23 07:33:50 crc kubenswrapper[4937]: I0123 07:33:50.086909 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started"
pod="openshift-marketplace/community-operators-p6dzs" Jan 23 07:33:50 crc kubenswrapper[4937]: I0123 07:33:50.509767 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6dzs" Jan 23 07:33:50 crc kubenswrapper[4937]: I0123 07:33:50.560455 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6dzs"] Jan 23 07:33:52 crc kubenswrapper[4937]: I0123 07:33:52.465284 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6dzs" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="registry-server" containerID="cri-o://b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d" gracePeriod=2 Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.080287 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6dzs" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.181136 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj2cv\" (UniqueName: \"kubernetes.io/projected/bb293b90-f69e-4e44-b357-f0fd6d867022-kube-api-access-jj2cv\") pod \"bb293b90-f69e-4e44-b357-f0fd6d867022\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.181357 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-catalog-content\") pod \"bb293b90-f69e-4e44-b357-f0fd6d867022\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.181430 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-utilities\") pod 
\"bb293b90-f69e-4e44-b357-f0fd6d867022\" (UID: \"bb293b90-f69e-4e44-b357-f0fd6d867022\") " Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.183197 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-utilities" (OuterVolumeSpecName: "utilities") pod "bb293b90-f69e-4e44-b357-f0fd6d867022" (UID: "bb293b90-f69e-4e44-b357-f0fd6d867022"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.187690 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb293b90-f69e-4e44-b357-f0fd6d867022-kube-api-access-jj2cv" (OuterVolumeSpecName: "kube-api-access-jj2cv") pod "bb293b90-f69e-4e44-b357-f0fd6d867022" (UID: "bb293b90-f69e-4e44-b357-f0fd6d867022"). InnerVolumeSpecName "kube-api-access-jj2cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.252845 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb293b90-f69e-4e44-b357-f0fd6d867022" (UID: "bb293b90-f69e-4e44-b357-f0fd6d867022"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.288611 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.288679 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb293b90-f69e-4e44-b357-f0fd6d867022-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.288695 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj2cv\" (UniqueName: \"kubernetes.io/projected/bb293b90-f69e-4e44-b357-f0fd6d867022-kube-api-access-jj2cv\") on node \"crc\" DevicePath \"\"" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.474872 4937 generic.go:334] "Generic (PLEG): container finished" podID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerID="b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d" exitCode=0 Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.476367 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerDied","Data":"b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d"} Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.476532 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6dzs" event={"ID":"bb293b90-f69e-4e44-b357-f0fd6d867022","Type":"ContainerDied","Data":"a353d31472854b787e9982f5137a7667bfd363b1342c86d05ac477ab7c2e40a3"} Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.476708 4937 scope.go:117] "RemoveContainer" containerID="b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 
07:33:53.477044 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6dzs" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.502157 4937 scope.go:117] "RemoveContainer" containerID="f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.539330 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6dzs"] Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.545087 4937 scope.go:117] "RemoveContainer" containerID="fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.551577 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6dzs"] Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.605788 4937 scope.go:117] "RemoveContainer" containerID="b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d" Jan 23 07:33:53 crc kubenswrapper[4937]: E0123 07:33:53.606341 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d\": container with ID starting with b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d not found: ID does not exist" containerID="b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.606496 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d"} err="failed to get container status \"b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d\": rpc error: code = NotFound desc = could not find container \"b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d\": container with ID starting with 
b76a434c6a6ba05c70faf8942f8686442243a5df12788e69bd9a9bac2b6adc1d not found: ID does not exist" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.606538 4937 scope.go:117] "RemoveContainer" containerID="f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a" Jan 23 07:33:53 crc kubenswrapper[4937]: E0123 07:33:53.606972 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a\": container with ID starting with f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a not found: ID does not exist" containerID="f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.607004 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a"} err="failed to get container status \"f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a\": rpc error: code = NotFound desc = could not find container \"f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a\": container with ID starting with f1db5c5502f3c19556c373536f163befb8837d988962a1d6e029b2fb13fe8d4a not found: ID does not exist" Jan 23 07:33:53 crc kubenswrapper[4937]: I0123 07:33:53.607053 4937 scope.go:117] "RemoveContainer" containerID="fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f" Jan 23 07:33:53 crc kubenswrapper[4937]: E0123 07:33:53.607410 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f\": container with ID starting with fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f not found: ID does not exist" containerID="fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f" Jan 23 07:33:53 crc 
kubenswrapper[4937]: I0123 07:33:53.607467 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f"} err="failed to get container status \"fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f\": rpc error: code = NotFound desc = could not find container \"fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f\": container with ID starting with fd9f7e2a4aaa04e331488000975861e8dfa94bea37135869c82731d75eb1bb4f not found: ID does not exist" Jan 23 07:33:54 crc kubenswrapper[4937]: I0123 07:33:54.536976 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" path="/var/lib/kubelet/pods/bb293b90-f69e-4e44-b357-f0fd6d867022/volumes" Jan 23 07:34:07 crc kubenswrapper[4937]: I0123 07:34:07.724428 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:34:07 crc kubenswrapper[4937]: I0123 07:34:07.724942 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.723469 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.724097 4937 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.724145 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.724895 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42bd4d42c3e8267bf4a52b2a98b7f0d0a5c5c6594069ffc43b2ef619d0de7d61"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.724950 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://42bd4d42c3e8267bf4a52b2a98b7f0d0a5c5c6594069ffc43b2ef619d0de7d61" gracePeriod=600 Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.908982 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="42bd4d42c3e8267bf4a52b2a98b7f0d0a5c5c6594069ffc43b2ef619d0de7d61" exitCode=0 Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 07:34:37.909042 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"42bd4d42c3e8267bf4a52b2a98b7f0d0a5c5c6594069ffc43b2ef619d0de7d61"} Jan 23 07:34:37 crc kubenswrapper[4937]: I0123 
07:34:37.909290 4937 scope.go:117] "RemoveContainer" containerID="6635cbbe2754603c5a2d2249250b32e270a70997935f46eea470a820ceeef83c" Jan 23 07:34:38 crc kubenswrapper[4937]: I0123 07:34:38.923167 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6"} Jan 23 07:37:07 crc kubenswrapper[4937]: I0123 07:37:07.724461 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:37:07 crc kubenswrapper[4937]: I0123 07:37:07.725122 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:37:37 crc kubenswrapper[4937]: I0123 07:37:37.724503 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:37:37 crc kubenswrapper[4937]: I0123 07:37:37.725193 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:38:07 crc 
kubenswrapper[4937]: I0123 07:38:07.724303 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:38:07 crc kubenswrapper[4937]: I0123 07:38:07.724952 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:38:07 crc kubenswrapper[4937]: I0123 07:38:07.725007 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:38:07 crc kubenswrapper[4937]: I0123 07:38:07.725989 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:38:07 crc kubenswrapper[4937]: I0123 07:38:07.726063 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" gracePeriod=600 Jan 23 07:38:07 crc kubenswrapper[4937]: E0123 07:38:07.848819 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:38:08 crc kubenswrapper[4937]: I0123 07:38:08.112719 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" exitCode=0 Jan 23 07:38:08 crc kubenswrapper[4937]: I0123 07:38:08.112979 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6"} Jan 23 07:38:08 crc kubenswrapper[4937]: I0123 07:38:08.113012 4937 scope.go:117] "RemoveContainer" containerID="42bd4d42c3e8267bf4a52b2a98b7f0d0a5c5c6594069ffc43b2ef619d0de7d61" Jan 23 07:38:08 crc kubenswrapper[4937]: I0123 07:38:08.113705 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:38:08 crc kubenswrapper[4937]: E0123 07:38:08.113956 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:38:23 crc kubenswrapper[4937]: I0123 07:38:23.526902 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:38:23 crc kubenswrapper[4937]: E0123 07:38:23.527808 4937 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:38:38 crc kubenswrapper[4937]: I0123 07:38:38.526507 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:38:38 crc kubenswrapper[4937]: E0123 07:38:38.527295 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:38:53 crc kubenswrapper[4937]: I0123 07:38:53.527165 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:38:53 crc kubenswrapper[4937]: E0123 07:38:53.528083 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:39:06 crc kubenswrapper[4937]: I0123 07:39:06.527277 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:39:06 crc kubenswrapper[4937]: E0123 07:39:06.528410 4937 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:39:17 crc kubenswrapper[4937]: I0123 07:39:17.530413 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:39:17 crc kubenswrapper[4937]: E0123 07:39:17.531231 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:39:29 crc kubenswrapper[4937]: I0123 07:39:29.526634 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:39:29 crc kubenswrapper[4937]: E0123 07:39:29.527675 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:39:41 crc kubenswrapper[4937]: I0123 07:39:41.526274 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:39:41 crc kubenswrapper[4937]: E0123 07:39:41.527038 4937 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:39:55 crc kubenswrapper[4937]: I0123 07:39:55.526654 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:39:55 crc kubenswrapper[4937]: E0123 07:39:55.527448 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:40:06 crc kubenswrapper[4937]: I0123 07:40:06.527693 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:40:06 crc kubenswrapper[4937]: E0123 07:40:06.529463 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.166841 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfjfl"] Jan 23 07:40:11 crc kubenswrapper[4937]: E0123 07:40:11.167967 
4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="registry-server" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.167989 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="registry-server" Jan 23 07:40:11 crc kubenswrapper[4937]: E0123 07:40:11.168016 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="extract-utilities" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.168030 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="extract-utilities" Jan 23 07:40:11 crc kubenswrapper[4937]: E0123 07:40:11.168070 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="extract-content" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.168082 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="extract-content" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.168418 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb293b90-f69e-4e44-b357-f0fd6d867022" containerName="registry-server" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.171019 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.211905 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfjfl"] Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.336345 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-utilities\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.336424 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-catalog-content\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.336646 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlzs\" (UniqueName: \"kubernetes.io/projected/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-kube-api-access-zjlzs\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.439082 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-utilities\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.439137 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-catalog-content\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.439197 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjlzs\" (UniqueName: \"kubernetes.io/projected/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-kube-api-access-zjlzs\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.439587 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-utilities\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.439967 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-catalog-content\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.462890 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlzs\" (UniqueName: \"kubernetes.io/projected/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-kube-api-access-zjlzs\") pod \"redhat-operators-kfjfl\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:11 crc kubenswrapper[4937]: I0123 07:40:11.514085 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:12 crc kubenswrapper[4937]: I0123 07:40:12.001223 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfjfl"] Jan 23 07:40:12 crc kubenswrapper[4937]: I0123 07:40:12.410995 4937 generic.go:334] "Generic (PLEG): container finished" podID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerID="69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff" exitCode=0 Jan 23 07:40:12 crc kubenswrapper[4937]: I0123 07:40:12.411059 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerDied","Data":"69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff"} Jan 23 07:40:12 crc kubenswrapper[4937]: I0123 07:40:12.411276 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerStarted","Data":"ab699ed2980b6e4019dab3faaa750c5581ce291a20d8aea0be8c1ddd428d16e4"} Jan 23 07:40:12 crc kubenswrapper[4937]: I0123 07:40:12.413136 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 07:40:14 crc kubenswrapper[4937]: I0123 07:40:14.432564 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerStarted","Data":"205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9"} Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.550219 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-29bnf"] Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.554080 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.559642 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29bnf"] Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.660190 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-catalog-content\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.660521 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4m4g\" (UniqueName: \"kubernetes.io/projected/57aa229d-4e24-4bae-854d-7ac895299c4f-kube-api-access-z4m4g\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.660570 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-utilities\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.762379 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4m4g\" (UniqueName: \"kubernetes.io/projected/57aa229d-4e24-4bae-854d-7ac895299c4f-kube-api-access-z4m4g\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.762753 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-utilities\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.762804 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-catalog-content\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.763562 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-catalog-content\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.765344 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-utilities\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.800672 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4m4g\" (UniqueName: \"kubernetes.io/projected/57aa229d-4e24-4bae-854d-7ac895299c4f-kube-api-access-z4m4g\") pod \"certified-operators-29bnf\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:16 crc kubenswrapper[4937]: I0123 07:40:16.914490 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:17 crc kubenswrapper[4937]: I0123 07:40:17.460427 4937 generic.go:334] "Generic (PLEG): container finished" podID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerID="205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9" exitCode=0 Jan 23 07:40:17 crc kubenswrapper[4937]: I0123 07:40:17.460470 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerDied","Data":"205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9"} Jan 23 07:40:17 crc kubenswrapper[4937]: I0123 07:40:17.549476 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29bnf"] Jan 23 07:40:17 crc kubenswrapper[4937]: W0123 07:40:17.554745 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57aa229d_4e24_4bae_854d_7ac895299c4f.slice/crio-eb15cfee95c17daabd08212b5e4e681691d31cab6892644d396f5c9f977107aa WatchSource:0}: Error finding container eb15cfee95c17daabd08212b5e4e681691d31cab6892644d396f5c9f977107aa: Status 404 returned error can't find the container with id eb15cfee95c17daabd08212b5e4e681691d31cab6892644d396f5c9f977107aa Jan 23 07:40:18 crc kubenswrapper[4937]: I0123 07:40:18.471730 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerStarted","Data":"a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1"} Jan 23 07:40:18 crc kubenswrapper[4937]: I0123 07:40:18.474832 4937 generic.go:334] "Generic (PLEG): container finished" podID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerID="6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8" exitCode=0 Jan 23 07:40:18 crc kubenswrapper[4937]: I0123 
07:40:18.474874 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerDied","Data":"6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8"} Jan 23 07:40:18 crc kubenswrapper[4937]: I0123 07:40:18.474903 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerStarted","Data":"eb15cfee95c17daabd08212b5e4e681691d31cab6892644d396f5c9f977107aa"} Jan 23 07:40:18 crc kubenswrapper[4937]: I0123 07:40:18.503261 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfjfl" podStartSLOduration=2.047893624 podStartE2EDuration="7.503242536s" podCreationTimestamp="2026-01-23 07:40:11 +0000 UTC" firstStartedPulling="2026-01-23 07:40:12.41288543 +0000 UTC m=+4012.216652083" lastFinishedPulling="2026-01-23 07:40:17.868234342 +0000 UTC m=+4017.672000995" observedRunningTime="2026-01-23 07:40:18.492268598 +0000 UTC m=+4018.296035251" watchObservedRunningTime="2026-01-23 07:40:18.503242536 +0000 UTC m=+4018.307009189" Jan 23 07:40:18 crc kubenswrapper[4937]: I0123 07:40:18.526460 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:40:18 crc kubenswrapper[4937]: E0123 07:40:18.526748 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:40:19 crc kubenswrapper[4937]: I0123 07:40:19.488837 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerStarted","Data":"ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8"} Jan 23 07:40:21 crc kubenswrapper[4937]: I0123 07:40:21.509994 4937 generic.go:334] "Generic (PLEG): container finished" podID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerID="ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8" exitCode=0 Jan 23 07:40:21 crc kubenswrapper[4937]: I0123 07:40:21.510084 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerDied","Data":"ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8"} Jan 23 07:40:21 crc kubenswrapper[4937]: I0123 07:40:21.514347 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:21 crc kubenswrapper[4937]: I0123 07:40:21.514375 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:22 crc kubenswrapper[4937]: I0123 07:40:22.541386 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerStarted","Data":"382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e"} Jan 23 07:40:22 crc kubenswrapper[4937]: I0123 07:40:22.570972 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kfjfl" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="registry-server" probeResult="failure" output=< Jan 23 07:40:22 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 07:40:22 crc kubenswrapper[4937]: > Jan 23 07:40:22 crc kubenswrapper[4937]: I0123 07:40:22.577053 4937 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-29bnf" podStartSLOduration=3.1437929000000002 podStartE2EDuration="6.577018294s" podCreationTimestamp="2026-01-23 07:40:16 +0000 UTC" firstStartedPulling="2026-01-23 07:40:18.476512941 +0000 UTC m=+4018.280279594" lastFinishedPulling="2026-01-23 07:40:21.909738335 +0000 UTC m=+4021.713504988" observedRunningTime="2026-01-23 07:40:22.565652516 +0000 UTC m=+4022.369419179" watchObservedRunningTime="2026-01-23 07:40:22.577018294 +0000 UTC m=+4022.380784947" Jan 23 07:40:26 crc kubenswrapper[4937]: I0123 07:40:26.915838 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:26 crc kubenswrapper[4937]: I0123 07:40:26.916303 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:26 crc kubenswrapper[4937]: I0123 07:40:26.977470 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:27 crc kubenswrapper[4937]: I0123 07:40:27.663701 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:27 crc kubenswrapper[4937]: I0123 07:40:27.719144 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29bnf"] Jan 23 07:40:29 crc kubenswrapper[4937]: I0123 07:40:29.601066 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-29bnf" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="registry-server" containerID="cri-o://382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e" gracePeriod=2 Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.169646 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.262010 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-utilities\") pod \"57aa229d-4e24-4bae-854d-7ac895299c4f\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.262303 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-catalog-content\") pod \"57aa229d-4e24-4bae-854d-7ac895299c4f\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.262370 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4m4g\" (UniqueName: \"kubernetes.io/projected/57aa229d-4e24-4bae-854d-7ac895299c4f-kube-api-access-z4m4g\") pod \"57aa229d-4e24-4bae-854d-7ac895299c4f\" (UID: \"57aa229d-4e24-4bae-854d-7ac895299c4f\") " Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.263059 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-utilities" (OuterVolumeSpecName: "utilities") pod "57aa229d-4e24-4bae-854d-7ac895299c4f" (UID: "57aa229d-4e24-4bae-854d-7ac895299c4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.264496 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.275280 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57aa229d-4e24-4bae-854d-7ac895299c4f-kube-api-access-z4m4g" (OuterVolumeSpecName: "kube-api-access-z4m4g") pod "57aa229d-4e24-4bae-854d-7ac895299c4f" (UID: "57aa229d-4e24-4bae-854d-7ac895299c4f"). InnerVolumeSpecName "kube-api-access-z4m4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.308236 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57aa229d-4e24-4bae-854d-7ac895299c4f" (UID: "57aa229d-4e24-4bae-854d-7ac895299c4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.366497 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57aa229d-4e24-4bae-854d-7ac895299c4f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.366546 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4m4g\" (UniqueName: \"kubernetes.io/projected/57aa229d-4e24-4bae-854d-7ac895299c4f-kube-api-access-z4m4g\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.541209 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:40:30 crc kubenswrapper[4937]: E0123 07:40:30.541938 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.619229 4937 generic.go:334] "Generic (PLEG): container finished" podID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerID="382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e" exitCode=0 Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.619291 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerDied","Data":"382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e"} Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.619303 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29bnf" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.619331 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29bnf" event={"ID":"57aa229d-4e24-4bae-854d-7ac895299c4f","Type":"ContainerDied","Data":"eb15cfee95c17daabd08212b5e4e681691d31cab6892644d396f5c9f977107aa"} Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.619357 4937 scope.go:117] "RemoveContainer" containerID="382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.655655 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29bnf"] Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.656690 4937 scope.go:117] "RemoveContainer" containerID="ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.667935 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-29bnf"] Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.689233 4937 scope.go:117] "RemoveContainer" containerID="6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.728699 4937 scope.go:117] "RemoveContainer" containerID="382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e" Jan 23 07:40:30 crc kubenswrapper[4937]: E0123 07:40:30.729166 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e\": container with ID starting with 382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e not found: ID does not exist" containerID="382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.729202 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e"} err="failed to get container status \"382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e\": rpc error: code = NotFound desc = could not find container \"382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e\": container with ID starting with 382addb9db0ea8c1cf56c04d672cdbaf5ca137782e00f3fe91e1cc420890430e not found: ID does not exist" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.729228 4937 scope.go:117] "RemoveContainer" containerID="ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8" Jan 23 07:40:30 crc kubenswrapper[4937]: E0123 07:40:30.729712 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8\": container with ID starting with ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8 not found: ID does not exist" containerID="ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.729746 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8"} err="failed to get container status \"ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8\": rpc error: code = NotFound desc = could not find container \"ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8\": container with ID starting with ac70a83dab144805fe591ad5dece734a3d68bcf385e3e75a20af2767a4f47cb8 not found: ID does not exist" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.729766 4937 scope.go:117] "RemoveContainer" containerID="6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8" Jan 23 07:40:30 crc kubenswrapper[4937]: E0123 
07:40:30.730099 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8\": container with ID starting with 6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8 not found: ID does not exist" containerID="6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8" Jan 23 07:40:30 crc kubenswrapper[4937]: I0123 07:40:30.730128 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8"} err="failed to get container status \"6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8\": rpc error: code = NotFound desc = could not find container \"6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8\": container with ID starting with 6f12feba00130cde80f00d8a7d4f010db87f6cefaeb0b56717d104113fd8c2d8 not found: ID does not exist" Jan 23 07:40:31 crc kubenswrapper[4937]: I0123 07:40:31.563389 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:31 crc kubenswrapper[4937]: I0123 07:40:31.614663 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:32 crc kubenswrapper[4937]: I0123 07:40:32.568187 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" path="/var/lib/kubelet/pods/57aa229d-4e24-4bae-854d-7ac895299c4f/volumes" Jan 23 07:40:32 crc kubenswrapper[4937]: I0123 07:40:32.633046 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfjfl"] Jan 23 07:40:32 crc kubenswrapper[4937]: I0123 07:40:32.642125 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kfjfl" 
podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="registry-server" containerID="cri-o://a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1" gracePeriod=2 Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.182727 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.232354 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-utilities\") pod \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.232518 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-catalog-content\") pod \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.232672 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjlzs\" (UniqueName: \"kubernetes.io/projected/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-kube-api-access-zjlzs\") pod \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\" (UID: \"77bcde11-13b9-42c0-a3c6-8d8ebab5302d\") " Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.233689 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-utilities" (OuterVolumeSpecName: "utilities") pod "77bcde11-13b9-42c0-a3c6-8d8ebab5302d" (UID: "77bcde11-13b9-42c0-a3c6-8d8ebab5302d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.241242 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-kube-api-access-zjlzs" (OuterVolumeSpecName: "kube-api-access-zjlzs") pod "77bcde11-13b9-42c0-a3c6-8d8ebab5302d" (UID: "77bcde11-13b9-42c0-a3c6-8d8ebab5302d"). InnerVolumeSpecName "kube-api-access-zjlzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.336628 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjlzs\" (UniqueName: \"kubernetes.io/projected/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-kube-api-access-zjlzs\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.336668 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.382197 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77bcde11-13b9-42c0-a3c6-8d8ebab5302d" (UID: "77bcde11-13b9-42c0-a3c6-8d8ebab5302d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.438437 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77bcde11-13b9-42c0-a3c6-8d8ebab5302d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.655708 4937 generic.go:334] "Generic (PLEG): container finished" podID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerID="a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1" exitCode=0 Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.655772 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerDied","Data":"a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1"} Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.655814 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfjfl" event={"ID":"77bcde11-13b9-42c0-a3c6-8d8ebab5302d","Type":"ContainerDied","Data":"ab699ed2980b6e4019dab3faaa750c5581ce291a20d8aea0be8c1ddd428d16e4"} Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.655819 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kfjfl" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.655838 4937 scope.go:117] "RemoveContainer" containerID="a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.678441 4937 scope.go:117] "RemoveContainer" containerID="205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.700063 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfjfl"] Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.710420 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kfjfl"] Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.714139 4937 scope.go:117] "RemoveContainer" containerID="69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.767922 4937 scope.go:117] "RemoveContainer" containerID="a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1" Jan 23 07:40:33 crc kubenswrapper[4937]: E0123 07:40:33.768328 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1\": container with ID starting with a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1 not found: ID does not exist" containerID="a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.768365 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1"} err="failed to get container status \"a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1\": rpc error: code = NotFound desc = could not find container 
\"a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1\": container with ID starting with a1c320e3eb9118a4f499aa1fb8c462522b89b8cfe5a90387f689ef78010218d1 not found: ID does not exist" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.768388 4937 scope.go:117] "RemoveContainer" containerID="205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9" Jan 23 07:40:33 crc kubenswrapper[4937]: E0123 07:40:33.768733 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9\": container with ID starting with 205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9 not found: ID does not exist" containerID="205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.768759 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9"} err="failed to get container status \"205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9\": rpc error: code = NotFound desc = could not find container \"205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9\": container with ID starting with 205b13e8135b7e5093c24e4a5f5447443a0847ef9c32435524ef2164084011e9 not found: ID does not exist" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.768773 4937 scope.go:117] "RemoveContainer" containerID="69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff" Jan 23 07:40:33 crc kubenswrapper[4937]: E0123 07:40:33.769036 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff\": container with ID starting with 69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff not found: ID does not exist" 
containerID="69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff" Jan 23 07:40:33 crc kubenswrapper[4937]: I0123 07:40:33.769056 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff"} err="failed to get container status \"69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff\": rpc error: code = NotFound desc = could not find container \"69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff\": container with ID starting with 69dea089f5dd66db46291e9789a6a6d01a1dbccfe0661e7da0236649545309ff not found: ID does not exist" Jan 23 07:40:34 crc kubenswrapper[4937]: I0123 07:40:34.537964 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" path="/var/lib/kubelet/pods/77bcde11-13b9-42c0-a3c6-8d8ebab5302d/volumes" Jan 23 07:40:44 crc kubenswrapper[4937]: I0123 07:40:44.528416 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:40:44 crc kubenswrapper[4937]: E0123 07:40:44.529155 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:40:57 crc kubenswrapper[4937]: I0123 07:40:57.526893 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:40:57 crc kubenswrapper[4937]: E0123 07:40:57.527579 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:41:11 crc kubenswrapper[4937]: I0123 07:41:11.527283 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:41:11 crc kubenswrapper[4937]: E0123 07:41:11.528031 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:41:25 crc kubenswrapper[4937]: I0123 07:41:25.525960 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:41:25 crc kubenswrapper[4937]: E0123 07:41:25.526816 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:41:33 crc kubenswrapper[4937]: I0123 07:41:33.761578 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="cdd3a96c-6f65-4f39-b435-78f7ceed08b5" containerName="galera" probeResult="failure" output="command timed out" Jan 23 07:41:38 crc kubenswrapper[4937]: I0123 07:41:38.526758 4937 scope.go:117] "RemoveContainer" 
containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:41:38 crc kubenswrapper[4937]: E0123 07:41:38.527413 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:41:49 crc kubenswrapper[4937]: I0123 07:41:49.526908 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:41:49 crc kubenswrapper[4937]: E0123 07:41:49.527682 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:42:02 crc kubenswrapper[4937]: I0123 07:42:02.526660 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:42:02 crc kubenswrapper[4937]: E0123 07:42:02.528814 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:42:13 crc kubenswrapper[4937]: I0123 07:42:13.527333 4937 scope.go:117] 
"RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:42:13 crc kubenswrapper[4937]: E0123 07:42:13.528220 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:42:27 crc kubenswrapper[4937]: I0123 07:42:27.527269 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:42:27 crc kubenswrapper[4937]: E0123 07:42:27.528948 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:42:38 crc kubenswrapper[4937]: I0123 07:42:38.528212 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:42:38 crc kubenswrapper[4937]: E0123 07:42:38.529676 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:42:52 crc kubenswrapper[4937]: I0123 07:42:52.526949 
4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:42:52 crc kubenswrapper[4937]: E0123 07:42:52.528110 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:43:04 crc kubenswrapper[4937]: I0123 07:43:04.526312 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:43:04 crc kubenswrapper[4937]: E0123 07:43:04.527102 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:43:15 crc kubenswrapper[4937]: I0123 07:43:15.527675 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6" Jan 23 07:43:16 crc kubenswrapper[4937]: I0123 07:43:16.369448 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"d5deafbedd0c28b98fc3777f3738e08623e5de8076b83dcd3805927e6b60bf71"} Jan 23 07:44:03 crc kubenswrapper[4937]: I0123 07:44:03.760118 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="cdd3a96c-6f65-4f39-b435-78f7ceed08b5" 
containerName="galera" probeResult="failure" output="command timed out" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.637384 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26xnt"] Jan 23 07:44:30 crc kubenswrapper[4937]: E0123 07:44:30.638755 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="extract-utilities" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.638779 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="extract-utilities" Jan 23 07:44:30 crc kubenswrapper[4937]: E0123 07:44:30.638800 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="registry-server" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.638812 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="registry-server" Jan 23 07:44:30 crc kubenswrapper[4937]: E0123 07:44:30.638842 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="extract-content" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.638853 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="extract-content" Jan 23 07:44:30 crc kubenswrapper[4937]: E0123 07:44:30.638886 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="registry-server" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.638898 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="registry-server" Jan 23 07:44:30 crc kubenswrapper[4937]: E0123 07:44:30.638936 4937 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="extract-utilities" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.638952 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="extract-utilities" Jan 23 07:44:30 crc kubenswrapper[4937]: E0123 07:44:30.638982 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="extract-content" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.638993 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="extract-content" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.639322 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="57aa229d-4e24-4bae-854d-7ac895299c4f" containerName="registry-server" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.639363 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="77bcde11-13b9-42c0-a3c6-8d8ebab5302d" containerName="registry-server" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.641870 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.652509 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26xnt"] Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.720797 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-catalog-content\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.720888 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-utilities\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.720915 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4q27\" (UniqueName: \"kubernetes.io/projected/7fd8bf72-8d37-472d-a195-74c3ac843925-kube-api-access-c4q27\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.823546 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-catalog-content\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.823635 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-utilities\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.823657 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4q27\" (UniqueName: \"kubernetes.io/projected/7fd8bf72-8d37-472d-a195-74c3ac843925-kube-api-access-c4q27\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.824130 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-catalog-content\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.824327 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-utilities\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.848542 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4q27\" (UniqueName: \"kubernetes.io/projected/7fd8bf72-8d37-472d-a195-74c3ac843925-kube-api-access-c4q27\") pod \"redhat-marketplace-26xnt\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:30 crc kubenswrapper[4937]: I0123 07:44:30.970433 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:31 crc kubenswrapper[4937]: I0123 07:44:31.448895 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26xnt"] Jan 23 07:44:32 crc kubenswrapper[4937]: I0123 07:44:32.206948 4937 generic.go:334] "Generic (PLEG): container finished" podID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerID="d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05" exitCode=0 Jan 23 07:44:32 crc kubenswrapper[4937]: I0123 07:44:32.207080 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerDied","Data":"d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05"} Jan 23 07:44:32 crc kubenswrapper[4937]: I0123 07:44:32.207282 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerStarted","Data":"f961be40563d135b393153b74794ce65e083407980ca379212e99f3f033f94d2"} Jan 23 07:44:33 crc kubenswrapper[4937]: I0123 07:44:33.233989 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerStarted","Data":"d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee"} Jan 23 07:44:34 crc kubenswrapper[4937]: I0123 07:44:34.245673 4937 generic.go:334] "Generic (PLEG): container finished" podID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerID="d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee" exitCode=0 Jan 23 07:44:34 crc kubenswrapper[4937]: I0123 07:44:34.245875 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" 
event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerDied","Data":"d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee"} Jan 23 07:44:35 crc kubenswrapper[4937]: I0123 07:44:35.260030 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerStarted","Data":"54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb"} Jan 23 07:44:35 crc kubenswrapper[4937]: I0123 07:44:35.275810 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-26xnt" podStartSLOduration=2.791593056 podStartE2EDuration="5.275790099s" podCreationTimestamp="2026-01-23 07:44:30 +0000 UTC" firstStartedPulling="2026-01-23 07:44:32.209750084 +0000 UTC m=+4272.013516777" lastFinishedPulling="2026-01-23 07:44:34.693947177 +0000 UTC m=+4274.497713820" observedRunningTime="2026-01-23 07:44:35.274778611 +0000 UTC m=+4275.078545274" watchObservedRunningTime="2026-01-23 07:44:35.275790099 +0000 UTC m=+4275.079556752" Jan 23 07:44:40 crc kubenswrapper[4937]: I0123 07:44:40.971764 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:40 crc kubenswrapper[4937]: I0123 07:44:40.973338 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:41 crc kubenswrapper[4937]: I0123 07:44:41.035735 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:41 crc kubenswrapper[4937]: I0123 07:44:41.360068 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:41 crc kubenswrapper[4937]: I0123 07:44:41.420219 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-26xnt"] Jan 23 07:44:43 crc kubenswrapper[4937]: I0123 07:44:43.334226 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-26xnt" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="registry-server" containerID="cri-o://54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb" gracePeriod=2 Jan 23 07:44:43 crc kubenswrapper[4937]: I0123 07:44:43.872432 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.044976 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-utilities\") pod \"7fd8bf72-8d37-472d-a195-74c3ac843925\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.045307 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-catalog-content\") pod \"7fd8bf72-8d37-472d-a195-74c3ac843925\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.045375 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4q27\" (UniqueName: \"kubernetes.io/projected/7fd8bf72-8d37-472d-a195-74c3ac843925-kube-api-access-c4q27\") pod \"7fd8bf72-8d37-472d-a195-74c3ac843925\" (UID: \"7fd8bf72-8d37-472d-a195-74c3ac843925\") " Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.046530 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-utilities" (OuterVolumeSpecName: "utilities") pod "7fd8bf72-8d37-472d-a195-74c3ac843925" (UID: 
"7fd8bf72-8d37-472d-a195-74c3ac843925"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.055144 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd8bf72-8d37-472d-a195-74c3ac843925-kube-api-access-c4q27" (OuterVolumeSpecName: "kube-api-access-c4q27") pod "7fd8bf72-8d37-472d-a195-74c3ac843925" (UID: "7fd8bf72-8d37-472d-a195-74c3ac843925"). InnerVolumeSpecName "kube-api-access-c4q27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.073327 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fd8bf72-8d37-472d-a195-74c3ac843925" (UID: "7fd8bf72-8d37-472d-a195-74c3ac843925"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.149115 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.149187 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4q27\" (UniqueName: \"kubernetes.io/projected/7fd8bf72-8d37-472d-a195-74c3ac843925-kube-api-access-c4q27\") on node \"crc\" DevicePath \"\"" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.149213 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fd8bf72-8d37-472d-a195-74c3ac843925-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.350912 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerID="54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb" exitCode=0 Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.350971 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerDied","Data":"54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb"} Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.351003 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26xnt" event={"ID":"7fd8bf72-8d37-472d-a195-74c3ac843925","Type":"ContainerDied","Data":"f961be40563d135b393153b74794ce65e083407980ca379212e99f3f033f94d2"} Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.351024 4937 scope.go:117] "RemoveContainer" containerID="54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.351080 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26xnt" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.393639 4937 scope.go:117] "RemoveContainer" containerID="d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.401116 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26xnt"] Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.413253 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26xnt"] Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.424902 4937 scope.go:117] "RemoveContainer" containerID="d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.493708 4937 scope.go:117] "RemoveContainer" containerID="54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb" Jan 23 07:44:44 crc kubenswrapper[4937]: E0123 07:44:44.494243 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb\": container with ID starting with 54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb not found: ID does not exist" containerID="54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.494283 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb"} err="failed to get container status \"54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb\": rpc error: code = NotFound desc = could not find container \"54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb\": container with ID starting with 54fda0bee877d530ad819cd744ab478405c96a08c143197cea5c012c7f74d0cb not found: 
ID does not exist" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.494309 4937 scope.go:117] "RemoveContainer" containerID="d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee" Jan 23 07:44:44 crc kubenswrapper[4937]: E0123 07:44:44.494661 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee\": container with ID starting with d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee not found: ID does not exist" containerID="d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.494683 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee"} err="failed to get container status \"d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee\": rpc error: code = NotFound desc = could not find container \"d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee\": container with ID starting with d311b57d7c83022c08ff9a4146235c5ee77bead38e242d55f5615b94bcc878ee not found: ID does not exist" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.494696 4937 scope.go:117] "RemoveContainer" containerID="d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05" Jan 23 07:44:44 crc kubenswrapper[4937]: E0123 07:44:44.495099 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05\": container with ID starting with d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05 not found: ID does not exist" containerID="d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.495121 4937 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05"} err="failed to get container status \"d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05\": rpc error: code = NotFound desc = could not find container \"d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05\": container with ID starting with d9bd26d6ac502685c791ef72e403b9d7b859e8cf26b67b104f485a985fe93d05 not found: ID does not exist" Jan 23 07:44:44 crc kubenswrapper[4937]: I0123 07:44:44.538832 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" path="/var/lib/kubelet/pods/7fd8bf72-8d37-472d-a195-74c3ac843925/volumes" Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.204974 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"] Jan 23 07:45:00 crc kubenswrapper[4937]: E0123 07:45:00.206041 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="registry-server" Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.206061 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="registry-server" Jan 23 07:45:00 crc kubenswrapper[4937]: E0123 07:45:00.206085 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="extract-utilities" Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.206093 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="extract-utilities" Jan 23 07:45:00 crc kubenswrapper[4937]: E0123 07:45:00.206119 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="extract-content" Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 
07:45:00.206128 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="extract-content"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.206375 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd8bf72-8d37-472d-a195-74c3ac843925" containerName="registry-server"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.207273 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.211160 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.211560 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.249064 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"]
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.292163 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ece4f7f-8921-4578-aa47-537051933e2f-config-volume\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.292386 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4l8j\" (UniqueName: \"kubernetes.io/projected/0ece4f7f-8921-4578-aa47-537051933e2f-kube-api-access-c4l8j\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.292418 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ece4f7f-8921-4578-aa47-537051933e2f-secret-volume\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.395227 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4l8j\" (UniqueName: \"kubernetes.io/projected/0ece4f7f-8921-4578-aa47-537051933e2f-kube-api-access-c4l8j\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.395299 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ece4f7f-8921-4578-aa47-537051933e2f-secret-volume\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.395494 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ece4f7f-8921-4578-aa47-537051933e2f-config-volume\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.396474 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ece4f7f-8921-4578-aa47-537051933e2f-config-volume\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.406259 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ece4f7f-8921-4578-aa47-537051933e2f-secret-volume\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.411783 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4l8j\" (UniqueName: \"kubernetes.io/projected/0ece4f7f-8921-4578-aa47-537051933e2f-kube-api-access-c4l8j\") pod \"collect-profiles-29485905-dkz9k\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:00 crc kubenswrapper[4937]: I0123 07:45:00.594247 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:01 crc kubenswrapper[4937]: I0123 07:45:01.041997 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"]
Jan 23 07:45:01 crc kubenswrapper[4937]: I0123 07:45:01.503107 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k" event={"ID":"0ece4f7f-8921-4578-aa47-537051933e2f","Type":"ContainerStarted","Data":"467a380d99b57dc1da550ccb15c03ae637cba6ec4af49285310931fc549509fc"}
Jan 23 07:45:01 crc kubenswrapper[4937]: I0123 07:45:01.504408 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k" event={"ID":"0ece4f7f-8921-4578-aa47-537051933e2f","Type":"ContainerStarted","Data":"60a75f9c59741023c5cedf38509940329dc86c62d6140d1ac17dee81b32f7d4d"}
Jan 23 07:45:01 crc kubenswrapper[4937]: I0123 07:45:01.524507 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k" podStartSLOduration=1.5244899109999999 podStartE2EDuration="1.524489911s" podCreationTimestamp="2026-01-23 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:45:01.519256659 +0000 UTC m=+4301.323023312" watchObservedRunningTime="2026-01-23 07:45:01.524489911 +0000 UTC m=+4301.328256564"
Jan 23 07:45:02 crc kubenswrapper[4937]: I0123 07:45:02.514055 4937 generic.go:334] "Generic (PLEG): container finished" podID="0ece4f7f-8921-4578-aa47-537051933e2f" containerID="467a380d99b57dc1da550ccb15c03ae637cba6ec4af49285310931fc549509fc" exitCode=0
Jan 23 07:45:02 crc kubenswrapper[4937]: I0123 07:45:02.514258 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k" event={"ID":"0ece4f7f-8921-4578-aa47-537051933e2f","Type":"ContainerDied","Data":"467a380d99b57dc1da550ccb15c03ae637cba6ec4af49285310931fc549509fc"}
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.882500 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.970487 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ece4f7f-8921-4578-aa47-537051933e2f-secret-volume\") pod \"0ece4f7f-8921-4578-aa47-537051933e2f\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") "
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.970918 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ece4f7f-8921-4578-aa47-537051933e2f-config-volume\") pod \"0ece4f7f-8921-4578-aa47-537051933e2f\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") "
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.970960 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4l8j\" (UniqueName: \"kubernetes.io/projected/0ece4f7f-8921-4578-aa47-537051933e2f-kube-api-access-c4l8j\") pod \"0ece4f7f-8921-4578-aa47-537051933e2f\" (UID: \"0ece4f7f-8921-4578-aa47-537051933e2f\") "
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.971369 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ece4f7f-8921-4578-aa47-537051933e2f-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ece4f7f-8921-4578-aa47-537051933e2f" (UID: "0ece4f7f-8921-4578-aa47-537051933e2f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.971931 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ece4f7f-8921-4578-aa47-537051933e2f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.976841 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ece4f7f-8921-4578-aa47-537051933e2f-kube-api-access-c4l8j" (OuterVolumeSpecName: "kube-api-access-c4l8j") pod "0ece4f7f-8921-4578-aa47-537051933e2f" (UID: "0ece4f7f-8921-4578-aa47-537051933e2f"). InnerVolumeSpecName "kube-api-access-c4l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:45:03 crc kubenswrapper[4937]: I0123 07:45:03.977553 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ece4f7f-8921-4578-aa47-537051933e2f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ece4f7f-8921-4578-aa47-537051933e2f" (UID: "0ece4f7f-8921-4578-aa47-537051933e2f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.079691 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4l8j\" (UniqueName: \"kubernetes.io/projected/0ece4f7f-8921-4578-aa47-537051933e2f-kube-api-access-c4l8j\") on node \"crc\" DevicePath \"\""
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.079762 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ece4f7f-8921-4578-aa47-537051933e2f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.535452 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.539724 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k" event={"ID":"0ece4f7f-8921-4578-aa47-537051933e2f","Type":"ContainerDied","Data":"60a75f9c59741023c5cedf38509940329dc86c62d6140d1ac17dee81b32f7d4d"}
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.539785 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a75f9c59741023c5cedf38509940329dc86c62d6140d1ac17dee81b32f7d4d"
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.594031 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"]
Jan 23 07:45:04 crc kubenswrapper[4937]: I0123 07:45:04.604164 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485860-655fs"]
Jan 23 07:45:06 crc kubenswrapper[4937]: I0123 07:45:06.541872 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6897fe65-100e-44a9-a48f-ddd8d84dc839" path="/var/lib/kubelet/pods/6897fe65-100e-44a9-a48f-ddd8d84dc839/volumes"
Jan 23 07:45:09 crc kubenswrapper[4937]: I0123 07:45:09.275261 4937 scope.go:117] "RemoveContainer" containerID="f8a15b5c27c437ca1ed9c98c65d0ee46e1e5ac7bcdee24cafc5b1b98a4c5c99e"
Jan 23 07:45:37 crc kubenswrapper[4937]: I0123 07:45:37.723957 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:45:37 crc kubenswrapper[4937]: I0123 07:45:37.725718 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:46:07 crc kubenswrapper[4937]: I0123 07:46:07.724176 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:46:07 crc kubenswrapper[4937]: I0123 07:46:07.724831 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:46:37 crc kubenswrapper[4937]: I0123 07:46:37.723864 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:46:37 crc kubenswrapper[4937]: I0123 07:46:37.724359 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:46:37 crc kubenswrapper[4937]: I0123 07:46:37.724416 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 07:46:37 crc kubenswrapper[4937]: I0123 07:46:37.725367 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5deafbedd0c28b98fc3777f3738e08623e5de8076b83dcd3805927e6b60bf71"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 07:46:37 crc kubenswrapper[4937]: I0123 07:46:37.725425 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://d5deafbedd0c28b98fc3777f3738e08623e5de8076b83dcd3805927e6b60bf71" gracePeriod=600
Jan 23 07:46:38 crc kubenswrapper[4937]: I0123 07:46:38.502579 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="d5deafbedd0c28b98fc3777f3738e08623e5de8076b83dcd3805927e6b60bf71" exitCode=0
Jan 23 07:46:38 crc kubenswrapper[4937]: I0123 07:46:38.502667 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"d5deafbedd0c28b98fc3777f3738e08623e5de8076b83dcd3805927e6b60bf71"}
Jan 23 07:46:38 crc kubenswrapper[4937]: I0123 07:46:38.503025 4937 scope.go:117] "RemoveContainer" containerID="f4a66e244d64f122526d551fd4ed953e6b31f1a1f49e60fb5d5b11de90d241e6"
Jan 23 07:46:39 crc kubenswrapper[4937]: I0123 07:46:39.514688 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"}
Jan 23 07:46:55 crc kubenswrapper[4937]: I0123 07:46:55.975607 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-24g7t"]
Jan 23 07:46:55 crc kubenswrapper[4937]: E0123 07:46:55.977238 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ece4f7f-8921-4578-aa47-537051933e2f" containerName="collect-profiles"
Jan 23 07:46:55 crc kubenswrapper[4937]: I0123 07:46:55.977255 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ece4f7f-8921-4578-aa47-537051933e2f" containerName="collect-profiles"
Jan 23 07:46:55 crc kubenswrapper[4937]: I0123 07:46:55.977533 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ece4f7f-8921-4578-aa47-537051933e2f" containerName="collect-profiles"
Jan 23 07:46:55 crc kubenswrapper[4937]: I0123 07:46:55.979404 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:55 crc kubenswrapper[4937]: I0123 07:46:55.997022 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-24g7t"]
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.126828 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddp4d\" (UniqueName: \"kubernetes.io/projected/2f798106-bf3f-464c-9e58-b2896f7f18e0-kube-api-access-ddp4d\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.126893 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-catalog-content\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.126925 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-utilities\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.229162 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddp4d\" (UniqueName: \"kubernetes.io/projected/2f798106-bf3f-464c-9e58-b2896f7f18e0-kube-api-access-ddp4d\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.229247 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-catalog-content\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.229926 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-catalog-content\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.229984 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-utilities\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.230008 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-utilities\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.259389 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddp4d\" (UniqueName: \"kubernetes.io/projected/2f798106-bf3f-464c-9e58-b2896f7f18e0-kube-api-access-ddp4d\") pod \"community-operators-24g7t\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") " pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.308196 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:46:56 crc kubenswrapper[4937]: I0123 07:46:56.855577 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-24g7t"]
Jan 23 07:46:57 crc kubenswrapper[4937]: I0123 07:46:57.690632 4937 generic.go:334] "Generic (PLEG): container finished" podID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerID="ad625d356ee9309e5bfb22965a271fba380a8c63654dfdf543285fe04361acb9" exitCode=0
Jan 23 07:46:57 crc kubenswrapper[4937]: I0123 07:46:57.690743 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerDied","Data":"ad625d356ee9309e5bfb22965a271fba380a8c63654dfdf543285fe04361acb9"}
Jan 23 07:46:57 crc kubenswrapper[4937]: I0123 07:46:57.690971 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerStarted","Data":"28511d122c19c4cd2271606330cd91053ced7eac051f9adfc3d600e140725a7c"}
Jan 23 07:46:57 crc kubenswrapper[4937]: I0123 07:46:57.693893 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 07:46:59 crc kubenswrapper[4937]: I0123 07:46:59.713540 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerStarted","Data":"0177b459c8b161de14f57cf79114be9a4fc1f86d23abaf9f4b51da15259b48b0"}
Jan 23 07:47:00 crc kubenswrapper[4937]: I0123 07:47:00.724239 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerDied","Data":"0177b459c8b161de14f57cf79114be9a4fc1f86d23abaf9f4b51da15259b48b0"}
Jan 23 07:47:00 crc kubenswrapper[4937]: I0123 07:47:00.724047 4937 generic.go:334] "Generic (PLEG): container finished" podID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerID="0177b459c8b161de14f57cf79114be9a4fc1f86d23abaf9f4b51da15259b48b0" exitCode=0
Jan 23 07:47:01 crc kubenswrapper[4937]: I0123 07:47:01.736320 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerStarted","Data":"0133d89425f5edece859c44d7e6af30e682d630a979774ff6e16245bee4a8889"}
Jan 23 07:47:06 crc kubenswrapper[4937]: I0123 07:47:06.309267 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:47:06 crc kubenswrapper[4937]: I0123 07:47:06.309900 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:47:06 crc kubenswrapper[4937]: I0123 07:47:06.366207 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:47:06 crc kubenswrapper[4937]: I0123 07:47:06.392061 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-24g7t" podStartSLOduration=7.96621436 podStartE2EDuration="11.392040027s" podCreationTimestamp="2026-01-23 07:46:55 +0000 UTC" firstStartedPulling="2026-01-23 07:46:57.693622832 +0000 UTC m=+4417.497389495" lastFinishedPulling="2026-01-23 07:47:01.119448509 +0000 UTC m=+4420.923215162" observedRunningTime="2026-01-23 07:47:01.753257251 +0000 UTC m=+4421.557023924" watchObservedRunningTime="2026-01-23 07:47:06.392040027 +0000 UTC m=+4426.195806690"
Jan 23 07:47:06 crc kubenswrapper[4937]: I0123 07:47:06.837680 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:47:10 crc kubenswrapper[4937]: I0123 07:47:10.656373 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-24g7t"]
Jan 23 07:47:10 crc kubenswrapper[4937]: I0123 07:47:10.657835 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-24g7t" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="registry-server" containerID="cri-o://0133d89425f5edece859c44d7e6af30e682d630a979774ff6e16245bee4a8889" gracePeriod=2
Jan 23 07:47:10 crc kubenswrapper[4937]: I0123 07:47:10.830413 4937 generic.go:334] "Generic (PLEG): container finished" podID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerID="0133d89425f5edece859c44d7e6af30e682d630a979774ff6e16245bee4a8889" exitCode=0
Jan 23 07:47:10 crc kubenswrapper[4937]: I0123 07:47:10.830525 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerDied","Data":"0133d89425f5edece859c44d7e6af30e682d630a979774ff6e16245bee4a8889"}
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.009971 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.089617 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-catalog-content\") pod \"2f798106-bf3f-464c-9e58-b2896f7f18e0\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") "
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.089754 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-utilities\") pod \"2f798106-bf3f-464c-9e58-b2896f7f18e0\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") "
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.089778 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddp4d\" (UniqueName: \"kubernetes.io/projected/2f798106-bf3f-464c-9e58-b2896f7f18e0-kube-api-access-ddp4d\") pod \"2f798106-bf3f-464c-9e58-b2896f7f18e0\" (UID: \"2f798106-bf3f-464c-9e58-b2896f7f18e0\") "
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.106738 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-utilities" (OuterVolumeSpecName: "utilities") pod "2f798106-bf3f-464c-9e58-b2896f7f18e0" (UID: "2f798106-bf3f-464c-9e58-b2896f7f18e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.156167 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f798106-bf3f-464c-9e58-b2896f7f18e0-kube-api-access-ddp4d" (OuterVolumeSpecName: "kube-api-access-ddp4d") pod "2f798106-bf3f-464c-9e58-b2896f7f18e0" (UID: "2f798106-bf3f-464c-9e58-b2896f7f18e0"). InnerVolumeSpecName "kube-api-access-ddp4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.169299 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f798106-bf3f-464c-9e58-b2896f7f18e0" (UID: "2f798106-bf3f-464c-9e58-b2896f7f18e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.192953 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.192977 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f798106-bf3f-464c-9e58-b2896f7f18e0-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.192986 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddp4d\" (UniqueName: \"kubernetes.io/projected/2f798106-bf3f-464c-9e58-b2896f7f18e0-kube-api-access-ddp4d\") on node \"crc\" DevicePath \"\""
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.864955 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24g7t" event={"ID":"2f798106-bf3f-464c-9e58-b2896f7f18e0","Type":"ContainerDied","Data":"28511d122c19c4cd2271606330cd91053ced7eac051f9adfc3d600e140725a7c"}
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.865396 4937 scope.go:117] "RemoveContainer" containerID="0133d89425f5edece859c44d7e6af30e682d630a979774ff6e16245bee4a8889"
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.865086 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24g7t"
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.900375 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-24g7t"]
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.901638 4937 scope.go:117] "RemoveContainer" containerID="0177b459c8b161de14f57cf79114be9a4fc1f86d23abaf9f4b51da15259b48b0"
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.917361 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-24g7t"]
Jan 23 07:47:12 crc kubenswrapper[4937]: I0123 07:47:12.933125 4937 scope.go:117] "RemoveContainer" containerID="ad625d356ee9309e5bfb22965a271fba380a8c63654dfdf543285fe04361acb9"
Jan 23 07:47:14 crc kubenswrapper[4937]: I0123 07:47:14.536686 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" path="/var/lib/kubelet/pods/2f798106-bf3f-464c-9e58-b2896f7f18e0/volumes"
Jan 23 07:49:07 crc kubenswrapper[4937]: I0123 07:49:07.724027 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:49:07 crc kubenswrapper[4937]: I0123 07:49:07.724582 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:49:37 crc kubenswrapper[4937]: I0123 07:49:37.723759 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:49:37 crc kubenswrapper[4937]: I0123 07:49:37.724246 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:50:07 crc kubenswrapper[4937]: I0123 07:50:07.724516 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 07:50:07 crc kubenswrapper[4937]: I0123 07:50:07.725042 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 07:50:07 crc kubenswrapper[4937]: I0123 07:50:07.725096 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 07:50:07 crc kubenswrapper[4937]: I0123 07:50:07.726027 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 07:50:07 crc kubenswrapper[4937]: I0123 07:50:07.726101 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a" gracePeriod=600
Jan 23 07:50:07 crc kubenswrapper[4937]: E0123 07:50:07.856932 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:50:08 crc kubenswrapper[4937]: I0123 07:50:08.665219 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a" exitCode=0
Jan 23 07:50:08 crc kubenswrapper[4937]: I0123 07:50:08.665294 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"}
Jan 23 07:50:08 crc kubenswrapper[4937]: I0123 07:50:08.665623 4937 scope.go:117] "RemoveContainer" containerID="d5deafbedd0c28b98fc3777f3738e08623e5de8076b83dcd3805927e6b60bf71"
Jan 23 07:50:08 crc kubenswrapper[4937]: I0123 07:50:08.666233 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:50:08 crc kubenswrapper[4937]: E0123 07:50:08.666481 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.050416 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gpnch"]
Jan 23 07:50:13 crc kubenswrapper[4937]: E0123 07:50:13.051508 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="extract-content"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.051527 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="extract-content"
Jan 23 07:50:13 crc kubenswrapper[4937]: E0123 07:50:13.051546 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="registry-server"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.051555 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="registry-server"
Jan 23 07:50:13 crc kubenswrapper[4937]: E0123 07:50:13.051609 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="extract-utilities"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.051621 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="extract-utilities"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.051879 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f798106-bf3f-464c-9e58-b2896f7f18e0" containerName="registry-server"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.053681 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpnch"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.064484 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpnch"]
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.143556 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-utilities\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.143623 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-catalog-content\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.143887 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmq6z\" (UniqueName: \"kubernetes.io/projected/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-kube-api-access-lmq6z\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.245505 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-utilities\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch"
Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.245855 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-catalog-content\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.246009 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmq6z\" (UniqueName: \"kubernetes.io/projected/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-kube-api-access-lmq6z\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.246108 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-utilities\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.246329 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-catalog-content\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.274468 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmq6z\" (UniqueName: \"kubernetes.io/projected/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-kube-api-access-lmq6z\") pod \"redhat-operators-gpnch\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.378816 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:13 crc kubenswrapper[4937]: I0123 07:50:13.878274 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpnch"] Jan 23 07:50:14 crc kubenswrapper[4937]: I0123 07:50:14.730579 4937 generic.go:334] "Generic (PLEG): container finished" podID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerID="18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015" exitCode=0 Jan 23 07:50:14 crc kubenswrapper[4937]: I0123 07:50:14.730691 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerDied","Data":"18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015"} Jan 23 07:50:14 crc kubenswrapper[4937]: I0123 07:50:14.730971 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerStarted","Data":"622e8b90c817389da61bda961d72bc80443e4153c4fd1d9f70406e675975800e"} Jan 23 07:50:15 crc kubenswrapper[4937]: I0123 07:50:15.743714 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerStarted","Data":"13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f"} Jan 23 07:50:18 crc kubenswrapper[4937]: I0123 07:50:18.770218 4937 generic.go:334] "Generic (PLEG): container finished" podID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerID="13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f" exitCode=0 Jan 23 07:50:18 crc kubenswrapper[4937]: I0123 07:50:18.770403 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" 
event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerDied","Data":"13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f"} Jan 23 07:50:19 crc kubenswrapper[4937]: I0123 07:50:19.781162 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerStarted","Data":"ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1"} Jan 23 07:50:19 crc kubenswrapper[4937]: I0123 07:50:19.803298 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gpnch" podStartSLOduration=2.353071143 podStartE2EDuration="6.803277631s" podCreationTimestamp="2026-01-23 07:50:13 +0000 UTC" firstStartedPulling="2026-01-23 07:50:14.73281067 +0000 UTC m=+4614.536577333" lastFinishedPulling="2026-01-23 07:50:19.183017168 +0000 UTC m=+4618.986783821" observedRunningTime="2026-01-23 07:50:19.797344009 +0000 UTC m=+4619.601110672" watchObservedRunningTime="2026-01-23 07:50:19.803277631 +0000 UTC m=+4619.607044284" Jan 23 07:50:22 crc kubenswrapper[4937]: I0123 07:50:22.527005 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a" Jan 23 07:50:22 crc kubenswrapper[4937]: E0123 07:50:22.527852 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:50:23 crc kubenswrapper[4937]: I0123 07:50:23.379065 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:23 crc kubenswrapper[4937]: 
I0123 07:50:23.379125 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:24 crc kubenswrapper[4937]: I0123 07:50:24.425009 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpnch" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="registry-server" probeResult="failure" output=< Jan 23 07:50:24 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 07:50:24 crc kubenswrapper[4937]: > Jan 23 07:50:33 crc kubenswrapper[4937]: I0123 07:50:33.433297 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:33 crc kubenswrapper[4937]: I0123 07:50:33.501909 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:33 crc kubenswrapper[4937]: I0123 07:50:33.674910 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpnch"] Jan 23 07:50:34 crc kubenswrapper[4937]: I0123 07:50:34.905349 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gpnch" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="registry-server" containerID="cri-o://ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1" gracePeriod=2 Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.526912 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a" Jan 23 07:50:35 crc kubenswrapper[4937]: E0123 07:50:35.527559 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.535930 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.679017 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-utilities\") pod \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.679076 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmq6z\" (UniqueName: \"kubernetes.io/projected/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-kube-api-access-lmq6z\") pod \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.679271 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-catalog-content\") pod \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\" (UID: \"f1baaa25-68d7-4afb-8f08-53ef363c1a4c\") " Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.680217 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-utilities" (OuterVolumeSpecName: "utilities") pod "f1baaa25-68d7-4afb-8f08-53ef363c1a4c" (UID: "f1baaa25-68d7-4afb-8f08-53ef363c1a4c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.685888 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-kube-api-access-lmq6z" (OuterVolumeSpecName: "kube-api-access-lmq6z") pod "f1baaa25-68d7-4afb-8f08-53ef363c1a4c" (UID: "f1baaa25-68d7-4afb-8f08-53ef363c1a4c"). InnerVolumeSpecName "kube-api-access-lmq6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.781719 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.781759 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmq6z\" (UniqueName: \"kubernetes.io/projected/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-kube-api-access-lmq6z\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.802503 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1baaa25-68d7-4afb-8f08-53ef363c1a4c" (UID: "f1baaa25-68d7-4afb-8f08-53ef363c1a4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.883447 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1baaa25-68d7-4afb-8f08-53ef363c1a4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.916097 4937 generic.go:334] "Generic (PLEG): container finished" podID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerID="ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1" exitCode=0 Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.916150 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerDied","Data":"ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1"} Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.916183 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpnch" event={"ID":"f1baaa25-68d7-4afb-8f08-53ef363c1a4c","Type":"ContainerDied","Data":"622e8b90c817389da61bda961d72bc80443e4153c4fd1d9f70406e675975800e"} Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.916218 4937 scope.go:117] "RemoveContainer" containerID="ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.916380 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gpnch" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.941732 4937 scope.go:117] "RemoveContainer" containerID="13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f" Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.966884 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpnch"] Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.968146 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gpnch"] Jan 23 07:50:35 crc kubenswrapper[4937]: I0123 07:50:35.976970 4937 scope.go:117] "RemoveContainer" containerID="18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.051287 4937 scope.go:117] "RemoveContainer" containerID="ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1" Jan 23 07:50:36 crc kubenswrapper[4937]: E0123 07:50:36.052503 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1\": container with ID starting with ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1 not found: ID does not exist" containerID="ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.052554 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1"} err="failed to get container status \"ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1\": rpc error: code = NotFound desc = could not find container \"ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1\": container with ID starting with ba41b0ca698679a785d29eb750b028b7c9ed283aa3e6615aa210724ee63b80d1 not found: ID does 
not exist" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.052605 4937 scope.go:117] "RemoveContainer" containerID="13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f" Jan 23 07:50:36 crc kubenswrapper[4937]: E0123 07:50:36.053130 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f\": container with ID starting with 13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f not found: ID does not exist" containerID="13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.053176 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f"} err="failed to get container status \"13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f\": rpc error: code = NotFound desc = could not find container \"13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f\": container with ID starting with 13fb08d8781eb6e75562ac739cd0b2b3944397936552a7d2a365c42a7fc3cf4f not found: ID does not exist" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.053238 4937 scope.go:117] "RemoveContainer" containerID="18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015" Jan 23 07:50:36 crc kubenswrapper[4937]: E0123 07:50:36.053743 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015\": container with ID starting with 18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015 not found: ID does not exist" containerID="18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.053779 4937 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015"} err="failed to get container status \"18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015\": rpc error: code = NotFound desc = could not find container \"18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015\": container with ID starting with 18746120dc04ab852393402f828dd418c18c21c160698d90de2d90d525b3d015 not found: ID does not exist" Jan 23 07:50:36 crc kubenswrapper[4937]: I0123 07:50:36.555671 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" path="/var/lib/kubelet/pods/f1baaa25-68d7-4afb-8f08-53ef363c1a4c/volumes" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.284190 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57vf4"] Jan 23 07:50:38 crc kubenswrapper[4937]: E0123 07:50:38.285777 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="registry-server" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.285898 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="registry-server" Jan 23 07:50:38 crc kubenswrapper[4937]: E0123 07:50:38.286013 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="extract-content" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.286085 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="extract-content" Jan 23 07:50:38 crc kubenswrapper[4937]: E0123 07:50:38.286177 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="extract-utilities" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.286250 4937 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="extract-utilities" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.286581 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1baaa25-68d7-4afb-8f08-53ef363c1a4c" containerName="registry-server" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.288684 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.297225 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57vf4"] Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.439440 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-utilities\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.439654 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-catalog-content\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.439830 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6v4\" (UniqueName: \"kubernetes.io/projected/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-kube-api-access-4b6v4\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 
07:50:38.541864 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-utilities\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.541938 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-catalog-content\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.542011 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6v4\" (UniqueName: \"kubernetes.io/projected/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-kube-api-access-4b6v4\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.542446 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-utilities\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.542738 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-catalog-content\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.562875 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6v4\" (UniqueName: \"kubernetes.io/projected/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-kube-api-access-4b6v4\") pod \"certified-operators-57vf4\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") " pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:38 crc kubenswrapper[4937]: I0123 07:50:38.614849 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57vf4" Jan 23 07:50:39 crc kubenswrapper[4937]: I0123 07:50:39.154675 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57vf4"] Jan 23 07:50:39 crc kubenswrapper[4937]: I0123 07:50:39.951320 4937 generic.go:334] "Generic (PLEG): container finished" podID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerID="8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b" exitCode=0 Jan 23 07:50:39 crc kubenswrapper[4937]: I0123 07:50:39.951420 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerDied","Data":"8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b"} Jan 23 07:50:39 crc kubenswrapper[4937]: I0123 07:50:39.951717 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerStarted","Data":"627514ebd4afa47ad21686ee11ad0ad646833f39b2f222c3d679543a8a702fe0"} Jan 23 07:50:40 crc kubenswrapper[4937]: I0123 07:50:40.976869 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerStarted","Data":"d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24"} Jan 23 07:50:41 crc kubenswrapper[4937]: I0123 07:50:41.994456 4937 
generic.go:334] "Generic (PLEG): container finished" podID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerID="d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24" exitCode=0
Jan 23 07:50:41 crc kubenswrapper[4937]: I0123 07:50:41.994518 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerDied","Data":"d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24"}
Jan 23 07:50:43 crc kubenswrapper[4937]: I0123 07:50:43.005584 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerStarted","Data":"ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16"}
Jan 23 07:50:43 crc kubenswrapper[4937]: I0123 07:50:43.023915 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-57vf4" podStartSLOduration=2.527455995 podStartE2EDuration="5.023896158s" podCreationTimestamp="2026-01-23 07:50:38 +0000 UTC" firstStartedPulling="2026-01-23 07:50:39.953882337 +0000 UTC m=+4639.757649000" lastFinishedPulling="2026-01-23 07:50:42.45032249 +0000 UTC m=+4642.254089163" observedRunningTime="2026-01-23 07:50:43.022854459 +0000 UTC m=+4642.826621132" watchObservedRunningTime="2026-01-23 07:50:43.023896158 +0000 UTC m=+4642.827662811"
Jan 23 07:50:48 crc kubenswrapper[4937]: I0123 07:50:48.615400 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-57vf4"
Jan 23 07:50:48 crc kubenswrapper[4937]: I0123 07:50:48.615997 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-57vf4"
Jan 23 07:50:48 crc kubenswrapper[4937]: I0123 07:50:48.684505 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-57vf4"
Jan 23 07:50:49 crc kubenswrapper[4937]: I0123 07:50:49.139753 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-57vf4"
Jan 23 07:50:49 crc kubenswrapper[4937]: I0123 07:50:49.205508 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57vf4"]
Jan 23 07:50:50 crc kubenswrapper[4937]: I0123 07:50:50.535866 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:50:50 crc kubenswrapper[4937]: E0123 07:50:50.536357 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.085731 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-57vf4" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="registry-server" containerID="cri-o://ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16" gracePeriod=2
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.591793 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57vf4"
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.739288 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b6v4\" (UniqueName: \"kubernetes.io/projected/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-kube-api-access-4b6v4\") pod \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") "
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.739522 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-catalog-content\") pod \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") "
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.739631 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-utilities\") pod \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\" (UID: \"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3\") "
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.740647 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-utilities" (OuterVolumeSpecName: "utilities") pod "9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" (UID: "9712d5ba-c2ae-41fb-a89c-d5bf378d20b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.745061 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-kube-api-access-4b6v4" (OuterVolumeSpecName: "kube-api-access-4b6v4") pod "9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" (UID: "9712d5ba-c2ae-41fb-a89c-d5bf378d20b3"). InnerVolumeSpecName "kube-api-access-4b6v4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.798218 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" (UID: "9712d5ba-c2ae-41fb-a89c-d5bf378d20b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.841484 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b6v4\" (UniqueName: \"kubernetes.io/projected/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-kube-api-access-4b6v4\") on node \"crc\" DevicePath \"\""
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.841520 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:50:51 crc kubenswrapper[4937]: I0123 07:50:51.841529 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.098777 4937 generic.go:334] "Generic (PLEG): container finished" podID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerID="ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16" exitCode=0
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.098848 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57vf4"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.098875 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerDied","Data":"ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16"}
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.099678 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57vf4" event={"ID":"9712d5ba-c2ae-41fb-a89c-d5bf378d20b3","Type":"ContainerDied","Data":"627514ebd4afa47ad21686ee11ad0ad646833f39b2f222c3d679543a8a702fe0"}
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.099714 4937 scope.go:117] "RemoveContainer" containerID="ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.140816 4937 scope.go:117] "RemoveContainer" containerID="d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.169162 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57vf4"]
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.183924 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-57vf4"]
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.190930 4937 scope.go:117] "RemoveContainer" containerID="8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.225240 4937 scope.go:117] "RemoveContainer" containerID="ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16"
Jan 23 07:50:52 crc kubenswrapper[4937]: E0123 07:50:52.225804 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16\": container with ID starting with ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16 not found: ID does not exist" containerID="ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.225836 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16"} err="failed to get container status \"ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16\": rpc error: code = NotFound desc = could not find container \"ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16\": container with ID starting with ee5e97a0adfb5c6816863a1e9861c0c8112a39a7c677ecbb2f222d4696258d16 not found: ID does not exist"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.225857 4937 scope.go:117] "RemoveContainer" containerID="d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24"
Jan 23 07:50:52 crc kubenswrapper[4937]: E0123 07:50:52.226160 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24\": container with ID starting with d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24 not found: ID does not exist" containerID="d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.226181 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24"} err="failed to get container status \"d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24\": rpc error: code = NotFound desc = could not find container \"d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24\": container with ID starting with d02939fccb25f27b0e4a7e4fdadce8dcd77a016ce06d866964ffb49e871acb24 not found: ID does not exist"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.226198 4937 scope.go:117] "RemoveContainer" containerID="8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b"
Jan 23 07:50:52 crc kubenswrapper[4937]: E0123 07:50:52.226474 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b\": container with ID starting with 8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b not found: ID does not exist" containerID="8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.226521 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b"} err="failed to get container status \"8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b\": rpc error: code = NotFound desc = could not find container \"8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b\": container with ID starting with 8b2ce51b213b305cb69c38f53f242f5fa263fcf05aa109934067cddbf56c324b not found: ID does not exist"
Jan 23 07:50:52 crc kubenswrapper[4937]: I0123 07:50:52.540733 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" path="/var/lib/kubelet/pods/9712d5ba-c2ae-41fb-a89c-d5bf378d20b3/volumes"
Jan 23 07:51:03 crc kubenswrapper[4937]: I0123 07:51:03.526736 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:51:03 crc kubenswrapper[4937]: E0123 07:51:03.527587 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:51:16 crc kubenswrapper[4937]: I0123 07:51:16.526895 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:51:16 crc kubenswrapper[4937]: E0123 07:51:16.527910 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:51:30 crc kubenswrapper[4937]: I0123 07:51:30.532858 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:51:30 crc kubenswrapper[4937]: E0123 07:51:30.533508 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:51:43 crc kubenswrapper[4937]: I0123 07:51:43.528268 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:51:43 crc kubenswrapper[4937]: E0123 07:51:43.529315 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:51:54 crc kubenswrapper[4937]: I0123 07:51:54.526989 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:51:54 crc kubenswrapper[4937]: E0123 07:51:54.527799 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:52:09 crc kubenswrapper[4937]: I0123 07:52:09.527513 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:52:09 crc kubenswrapper[4937]: E0123 07:52:09.528260 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:52:24 crc kubenswrapper[4937]: I0123 07:52:24.527101 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:52:24 crc kubenswrapper[4937]: E0123 07:52:24.527866 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:52:36 crc kubenswrapper[4937]: I0123 07:52:36.526280 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:52:36 crc kubenswrapper[4937]: E0123 07:52:36.526972 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:52:49 crc kubenswrapper[4937]: I0123 07:52:49.526552 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:52:49 crc kubenswrapper[4937]: E0123 07:52:49.527431 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:53:00 crc kubenswrapper[4937]: I0123 07:53:00.533526 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:53:00 crc kubenswrapper[4937]: E0123 07:53:00.534458 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:53:13 crc kubenswrapper[4937]: I0123 07:53:13.527065 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:53:13 crc kubenswrapper[4937]: E0123 07:53:13.528006 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:53:25 crc kubenswrapper[4937]: I0123 07:53:25.527308 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:53:25 crc kubenswrapper[4937]: E0123 07:53:25.528636 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:53:40 crc kubenswrapper[4937]: I0123 07:53:40.535578 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:53:40 crc kubenswrapper[4937]: E0123 07:53:40.537150 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:53:52 crc kubenswrapper[4937]: I0123 07:53:52.526637 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:53:52 crc kubenswrapper[4937]: E0123 07:53:52.527412 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:54:06 crc kubenswrapper[4937]: I0123 07:54:06.526331 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:54:06 crc kubenswrapper[4937]: E0123 07:54:06.527306 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:54:18 crc kubenswrapper[4937]: I0123 07:54:18.688873 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:54:18 crc kubenswrapper[4937]: E0123 07:54:18.690120 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:54:30 crc kubenswrapper[4937]: I0123 07:54:30.532529 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:54:30 crc kubenswrapper[4937]: E0123 07:54:30.533365 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:54:45 crc kubenswrapper[4937]: I0123 07:54:45.527067 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:54:45 crc kubenswrapper[4937]: E0123 07:54:45.528028 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:54:59 crc kubenswrapper[4937]: I0123 07:54:59.529340 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:54:59 crc kubenswrapper[4937]: E0123 07:54:59.531543 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.858572 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jvhv5"]
Jan 23 07:55:01 crc kubenswrapper[4937]: E0123 07:55:01.859554 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="extract-utilities"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.859576 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="extract-utilities"
Jan 23 07:55:01 crc kubenswrapper[4937]: E0123 07:55:01.859672 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="extract-content"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.859686 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="extract-content"
Jan 23 07:55:01 crc kubenswrapper[4937]: E0123 07:55:01.859711 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="registry-server"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.859722 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="registry-server"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.860054 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="9712d5ba-c2ae-41fb-a89c-d5bf378d20b3" containerName="registry-server"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.862441 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:01 crc kubenswrapper[4937]: I0123 07:55:01.884814 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvhv5"]
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.002776 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-catalog-content\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.002842 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-utilities\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.002936 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhmjd\" (UniqueName: \"kubernetes.io/projected/b17d0906-f239-499b-b9cd-a91b172b227b-kube-api-access-nhmjd\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.104957 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-catalog-content\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.105027 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-utilities\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.105069 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhmjd\" (UniqueName: \"kubernetes.io/projected/b17d0906-f239-499b-b9cd-a91b172b227b-kube-api-access-nhmjd\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.105477 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-catalog-content\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.105516 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-utilities\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.136540 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhmjd\" (UniqueName: \"kubernetes.io/projected/b17d0906-f239-499b-b9cd-a91b172b227b-kube-api-access-nhmjd\") pod \"redhat-marketplace-jvhv5\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") " pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.193016 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:02 crc kubenswrapper[4937]: I0123 07:55:02.718326 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvhv5"]
Jan 23 07:55:03 crc kubenswrapper[4937]: I0123 07:55:03.718303 4937 generic.go:334] "Generic (PLEG): container finished" podID="b17d0906-f239-499b-b9cd-a91b172b227b" containerID="be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f" exitCode=0
Jan 23 07:55:03 crc kubenswrapper[4937]: I0123 07:55:03.718346 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerDied","Data":"be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f"}
Jan 23 07:55:03 crc kubenswrapper[4937]: I0123 07:55:03.718626 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerStarted","Data":"295e3528445b4a9410ffdceb3063a4152a2c1034f09757d56e92b29ffddad081"}
Jan 23 07:55:03 crc kubenswrapper[4937]: I0123 07:55:03.721507 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 07:55:04 crc kubenswrapper[4937]: I0123 07:55:04.731542 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerStarted","Data":"cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168"}
Jan 23 07:55:05 crc kubenswrapper[4937]: I0123 07:55:05.744236 4937 generic.go:334] "Generic (PLEG): container finished" podID="b17d0906-f239-499b-b9cd-a91b172b227b" containerID="cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168" exitCode=0
Jan 23 07:55:05 crc kubenswrapper[4937]: I0123 07:55:05.744310 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerDied","Data":"cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168"}
Jan 23 07:55:06 crc kubenswrapper[4937]: I0123 07:55:06.755940 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerStarted","Data":"75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4"}
Jan 23 07:55:06 crc kubenswrapper[4937]: I0123 07:55:06.777857 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jvhv5" podStartSLOduration=3.347553493 podStartE2EDuration="5.777838264s" podCreationTimestamp="2026-01-23 07:55:01 +0000 UTC" firstStartedPulling="2026-01-23 07:55:03.720972857 +0000 UTC m=+4903.524739510" lastFinishedPulling="2026-01-23 07:55:06.151257628 +0000 UTC m=+4905.955024281" observedRunningTime="2026-01-23 07:55:06.775585863 +0000 UTC m=+4906.579352516" watchObservedRunningTime="2026-01-23 07:55:06.777838264 +0000 UTC m=+4906.581604917"
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.193500 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.194037 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.238790 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.527525 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a"
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.837747 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"2750c8793978e6751989bf1064df680550e4c785e14b6869bda1eb315f35a26e"}
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.898140 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:12 crc kubenswrapper[4937]: I0123 07:55:12.956301 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvhv5"]
Jan 23 07:55:14 crc kubenswrapper[4937]: I0123 07:55:14.851339 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jvhv5" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="registry-server" containerID="cri-o://75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4" gracePeriod=2
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.626766 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvhv5"
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.790440 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhmjd\" (UniqueName: \"kubernetes.io/projected/b17d0906-f239-499b-b9cd-a91b172b227b-kube-api-access-nhmjd\") pod \"b17d0906-f239-499b-b9cd-a91b172b227b\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") "
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.790746 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-utilities\") pod \"b17d0906-f239-499b-b9cd-a91b172b227b\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") "
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.790904 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-catalog-content\") pod \"b17d0906-f239-499b-b9cd-a91b172b227b\" (UID: \"b17d0906-f239-499b-b9cd-a91b172b227b\") "
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.791671 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-utilities" (OuterVolumeSpecName: "utilities") pod "b17d0906-f239-499b-b9cd-a91b172b227b" (UID: "b17d0906-f239-499b-b9cd-a91b172b227b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.800393 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17d0906-f239-499b-b9cd-a91b172b227b-kube-api-access-nhmjd" (OuterVolumeSpecName: "kube-api-access-nhmjd") pod "b17d0906-f239-499b-b9cd-a91b172b227b" (UID: "b17d0906-f239-499b-b9cd-a91b172b227b"). InnerVolumeSpecName "kube-api-access-nhmjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.820829 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b17d0906-f239-499b-b9cd-a91b172b227b" (UID: "b17d0906-f239-499b-b9cd-a91b172b227b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.863037 4937 generic.go:334] "Generic (PLEG): container finished" podID="b17d0906-f239-499b-b9cd-a91b172b227b" containerID="75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4" exitCode=0
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.863081 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerDied","Data":"75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4"}
Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.863093 4937 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jvhv5" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.863111 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jvhv5" event={"ID":"b17d0906-f239-499b-b9cd-a91b172b227b","Type":"ContainerDied","Data":"295e3528445b4a9410ffdceb3063a4152a2c1034f09757d56e92b29ffddad081"} Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.863129 4937 scope.go:117] "RemoveContainer" containerID="75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.893267 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.893314 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhmjd\" (UniqueName: \"kubernetes.io/projected/b17d0906-f239-499b-b9cd-a91b172b227b-kube-api-access-nhmjd\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.893329 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17d0906-f239-499b-b9cd-a91b172b227b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.893557 4937 scope.go:117] "RemoveContainer" containerID="cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.897407 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvhv5"] Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.907747 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jvhv5"] Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.918337 4937 scope.go:117] 
"RemoveContainer" containerID="be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.961701 4937 scope.go:117] "RemoveContainer" containerID="75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4" Jan 23 07:55:15 crc kubenswrapper[4937]: E0123 07:55:15.962269 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4\": container with ID starting with 75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4 not found: ID does not exist" containerID="75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.962318 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4"} err="failed to get container status \"75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4\": rpc error: code = NotFound desc = could not find container \"75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4\": container with ID starting with 75bd76a2135c3862f78c89c3ab4621bf08c6254fedf9fc53f9b490ee5a116aa4 not found: ID does not exist" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.962345 4937 scope.go:117] "RemoveContainer" containerID="cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168" Jan 23 07:55:15 crc kubenswrapper[4937]: E0123 07:55:15.962559 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168\": container with ID starting with cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168 not found: ID does not exist" containerID="cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168" Jan 23 07:55:15 crc 
kubenswrapper[4937]: I0123 07:55:15.962612 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168"} err="failed to get container status \"cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168\": rpc error: code = NotFound desc = could not find container \"cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168\": container with ID starting with cf57f628e0e40d264634a73d0afb24b31b5f4e623a1741137a1f12632f3b4168 not found: ID does not exist" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.962632 4937 scope.go:117] "RemoveContainer" containerID="be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f" Jan 23 07:55:15 crc kubenswrapper[4937]: E0123 07:55:15.962987 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f\": container with ID starting with be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f not found: ID does not exist" containerID="be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f" Jan 23 07:55:15 crc kubenswrapper[4937]: I0123 07:55:15.963016 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f"} err="failed to get container status \"be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f\": rpc error: code = NotFound desc = could not find container \"be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f\": container with ID starting with be493fdb023cfdbf0102b001167787a5db0c43d46b05c61dacf736751c061a3f not found: ID does not exist" Jan 23 07:55:16 crc kubenswrapper[4937]: I0123 07:55:16.547220 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" 
path="/var/lib/kubelet/pods/b17d0906-f239-499b-b9cd-a91b172b227b/volumes" Jan 23 07:57:37 crc kubenswrapper[4937]: I0123 07:57:37.723609 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:57:37 crc kubenswrapper[4937]: I0123 07:57:37.724285 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:58:07 crc kubenswrapper[4937]: I0123 07:58:07.724175 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:58:07 crc kubenswrapper[4937]: I0123 07:58:07.726802 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.156208 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9hvb"] Jan 23 07:58:08 crc kubenswrapper[4937]: E0123 07:58:08.156986 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="extract-utilities" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 
07:58:08.157010 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="extract-utilities" Jan 23 07:58:08 crc kubenswrapper[4937]: E0123 07:58:08.157033 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="registry-server" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.157041 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="registry-server" Jan 23 07:58:08 crc kubenswrapper[4937]: E0123 07:58:08.157055 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="extract-content" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.157060 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="extract-content" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.157309 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17d0906-f239-499b-b9cd-a91b172b227b" containerName="registry-server" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.159058 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.194750 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9hvb"] Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.287002 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-utilities\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.287075 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-catalog-content\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.287129 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frb69\" (UniqueName: \"kubernetes.io/projected/8da2c4cd-0975-4bf2-8a41-d19123d1680f-kube-api-access-frb69\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.388664 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-utilities\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.388733 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-catalog-content\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.388797 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frb69\" (UniqueName: \"kubernetes.io/projected/8da2c4cd-0975-4bf2-8a41-d19123d1680f-kube-api-access-frb69\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.389131 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-catalog-content\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.389466 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-utilities\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.407509 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frb69\" (UniqueName: \"kubernetes.io/projected/8da2c4cd-0975-4bf2-8a41-d19123d1680f-kube-api-access-frb69\") pod \"community-operators-p9hvb\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.487170 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:08 crc kubenswrapper[4937]: I0123 07:58:08.992233 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9hvb"] Jan 23 07:58:09 crc kubenswrapper[4937]: I0123 07:58:09.583516 4937 generic.go:334] "Generic (PLEG): container finished" podID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerID="5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018" exitCode=0 Jan 23 07:58:09 crc kubenswrapper[4937]: I0123 07:58:09.583574 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerDied","Data":"5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018"} Jan 23 07:58:09 crc kubenswrapper[4937]: I0123 07:58:09.584139 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerStarted","Data":"cc4b1fbae0379cc71b66ceda4bfbca269622fcb4da403e248c5baa44916a4de5"} Jan 23 07:58:10 crc kubenswrapper[4937]: I0123 07:58:10.594346 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerStarted","Data":"6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762"} Jan 23 07:58:11 crc kubenswrapper[4937]: I0123 07:58:11.607389 4937 generic.go:334] "Generic (PLEG): container finished" podID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerID="6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762" exitCode=0 Jan 23 07:58:11 crc kubenswrapper[4937]: I0123 07:58:11.607513 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" 
event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerDied","Data":"6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762"} Jan 23 07:58:12 crc kubenswrapper[4937]: I0123 07:58:12.620760 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerStarted","Data":"c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056"} Jan 23 07:58:12 crc kubenswrapper[4937]: I0123 07:58:12.645194 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9hvb" podStartSLOduration=2.249507602 podStartE2EDuration="4.645179122s" podCreationTimestamp="2026-01-23 07:58:08 +0000 UTC" firstStartedPulling="2026-01-23 07:58:09.585943891 +0000 UTC m=+5089.389710564" lastFinishedPulling="2026-01-23 07:58:11.981615421 +0000 UTC m=+5091.785382084" observedRunningTime="2026-01-23 07:58:12.63995878 +0000 UTC m=+5092.443725433" watchObservedRunningTime="2026-01-23 07:58:12.645179122 +0000 UTC m=+5092.448945775" Jan 23 07:58:18 crc kubenswrapper[4937]: I0123 07:58:18.487827 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:18 crc kubenswrapper[4937]: I0123 07:58:18.488286 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:18 crc kubenswrapper[4937]: I0123 07:58:18.540083 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:18 crc kubenswrapper[4937]: I0123 07:58:18.740745 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:18 crc kubenswrapper[4937]: I0123 07:58:18.794233 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-p9hvb"] Jan 23 07:58:20 crc kubenswrapper[4937]: I0123 07:58:20.707538 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9hvb" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="registry-server" containerID="cri-o://c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056" gracePeriod=2 Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.244123 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.287338 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-catalog-content\") pod \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.287567 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-utilities\") pod \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.287710 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frb69\" (UniqueName: \"kubernetes.io/projected/8da2c4cd-0975-4bf2-8a41-d19123d1680f-kube-api-access-frb69\") pod \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\" (UID: \"8da2c4cd-0975-4bf2-8a41-d19123d1680f\") " Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.288372 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-utilities" (OuterVolumeSpecName: "utilities") pod "8da2c4cd-0975-4bf2-8a41-d19123d1680f" (UID: 
"8da2c4cd-0975-4bf2-8a41-d19123d1680f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.298900 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da2c4cd-0975-4bf2-8a41-d19123d1680f-kube-api-access-frb69" (OuterVolumeSpecName: "kube-api-access-frb69") pod "8da2c4cd-0975-4bf2-8a41-d19123d1680f" (UID: "8da2c4cd-0975-4bf2-8a41-d19123d1680f"). InnerVolumeSpecName "kube-api-access-frb69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.340410 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8da2c4cd-0975-4bf2-8a41-d19123d1680f" (UID: "8da2c4cd-0975-4bf2-8a41-d19123d1680f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.389847 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.389883 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8da2c4cd-0975-4bf2-8a41-d19123d1680f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.389895 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frb69\" (UniqueName: \"kubernetes.io/projected/8da2c4cd-0975-4bf2-8a41-d19123d1680f-kube-api-access-frb69\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.720363 4937 generic.go:334] "Generic (PLEG): container finished" 
podID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerID="c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056" exitCode=0 Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.720425 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerDied","Data":"c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056"} Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.720445 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9hvb" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.720465 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9hvb" event={"ID":"8da2c4cd-0975-4bf2-8a41-d19123d1680f","Type":"ContainerDied","Data":"cc4b1fbae0379cc71b66ceda4bfbca269622fcb4da403e248c5baa44916a4de5"} Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.720500 4937 scope.go:117] "RemoveContainer" containerID="c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.756243 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9hvb"] Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.759635 4937 scope.go:117] "RemoveContainer" containerID="6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.771641 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9hvb"] Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.783100 4937 scope.go:117] "RemoveContainer" containerID="5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.829957 4937 scope.go:117] "RemoveContainer" 
containerID="c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056" Jan 23 07:58:21 crc kubenswrapper[4937]: E0123 07:58:21.833277 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056\": container with ID starting with c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056 not found: ID does not exist" containerID="c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.833341 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056"} err="failed to get container status \"c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056\": rpc error: code = NotFound desc = could not find container \"c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056\": container with ID starting with c9a52516dc820663180f0d200ad8e1c2a662fe6931379007ec1b69dff84ce056 not found: ID does not exist" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.833369 4937 scope.go:117] "RemoveContainer" containerID="6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762" Jan 23 07:58:21 crc kubenswrapper[4937]: E0123 07:58:21.834011 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762\": container with ID starting with 6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762 not found: ID does not exist" containerID="6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.834062 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762"} err="failed to get container status \"6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762\": rpc error: code = NotFound desc = could not find container \"6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762\": container with ID starting with 6e23568fb46b0bbce99ecd8c7699eee9dca57a3bd04f26fa7cbf6c3fc97c9762 not found: ID does not exist" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.834077 4937 scope.go:117] "RemoveContainer" containerID="5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018" Jan 23 07:58:21 crc kubenswrapper[4937]: E0123 07:58:21.834427 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018\": container with ID starting with 5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018 not found: ID does not exist" containerID="5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018" Jan 23 07:58:21 crc kubenswrapper[4937]: I0123 07:58:21.834467 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018"} err="failed to get container status \"5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018\": rpc error: code = NotFound desc = could not find container \"5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018\": container with ID starting with 5ee0bf17a5c4ac688e13f2ebd22723fe0c305bd106ea1bb106cf1f8c14262018 not found: ID does not exist" Jan 23 07:58:22 crc kubenswrapper[4937]: I0123 07:58:22.539407 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" path="/var/lib/kubelet/pods/8da2c4cd-0975-4bf2-8a41-d19123d1680f/volumes" Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 
07:58:37.723980 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.724527 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.724579 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.725405 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2750c8793978e6751989bf1064df680550e4c785e14b6869bda1eb315f35a26e"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.725463 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://2750c8793978e6751989bf1064df680550e4c785e14b6869bda1eb315f35a26e" gracePeriod=600 Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.893726 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="2750c8793978e6751989bf1064df680550e4c785e14b6869bda1eb315f35a26e" exitCode=0 Jan 23 
07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.893774 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"2750c8793978e6751989bf1064df680550e4c785e14b6869bda1eb315f35a26e"} Jan 23 07:58:37 crc kubenswrapper[4937]: I0123 07:58:37.893807 4937 scope.go:117] "RemoveContainer" containerID="cee2e495c1b710b7e98030e2687272b0a257f1d865897424a886e66abe4e885a" Jan 23 07:58:38 crc kubenswrapper[4937]: I0123 07:58:38.904731 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae"} Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.179662 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6"] Jan 23 08:00:00 crc kubenswrapper[4937]: E0123 08:00:00.180818 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="extract-utilities" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.180839 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="extract-utilities" Jan 23 08:00:00 crc kubenswrapper[4937]: E0123 08:00:00.180892 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="registry-server" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.180903 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="registry-server" Jan 23 08:00:00 crc kubenswrapper[4937]: E0123 08:00:00.180927 4937 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="extract-content" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.180936 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="extract-content" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.181154 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da2c4cd-0975-4bf2-8a41-d19123d1680f" containerName="registry-server" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.182030 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.187199 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.188668 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.191458 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6"] Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.250049 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxs9g\" (UniqueName: \"kubernetes.io/projected/52fc1c96-b647-4e13-9872-d384825446c9-kube-api-access-cxs9g\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.250103 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/52fc1c96-b647-4e13-9872-d384825446c9-config-volume\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.250161 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52fc1c96-b647-4e13-9872-d384825446c9-secret-volume\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.352075 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxs9g\" (UniqueName: \"kubernetes.io/projected/52fc1c96-b647-4e13-9872-d384825446c9-kube-api-access-cxs9g\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.352351 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52fc1c96-b647-4e13-9872-d384825446c9-config-volume\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.352480 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52fc1c96-b647-4e13-9872-d384825446c9-secret-volume\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: 
I0123 08:00:00.353413 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52fc1c96-b647-4e13-9872-d384825446c9-config-volume\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.466487 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52fc1c96-b647-4e13-9872-d384825446c9-secret-volume\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.466541 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxs9g\" (UniqueName: \"kubernetes.io/projected/52fc1c96-b647-4e13-9872-d384825446c9-kube-api-access-cxs9g\") pod \"collect-profiles-29485920-hwqj6\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:00 crc kubenswrapper[4937]: I0123 08:00:00.528194 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:01 crc kubenswrapper[4937]: I0123 08:00:01.026326 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6"] Jan 23 08:00:01 crc kubenswrapper[4937]: I0123 08:00:01.580230 4937 generic.go:334] "Generic (PLEG): container finished" podID="52fc1c96-b647-4e13-9872-d384825446c9" containerID="a64d5129fce07238117013e99432ef1166d2f6b747ff7921a11c6cc311ef9cd9" exitCode=0 Jan 23 08:00:01 crc kubenswrapper[4937]: I0123 08:00:01.580297 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" event={"ID":"52fc1c96-b647-4e13-9872-d384825446c9","Type":"ContainerDied","Data":"a64d5129fce07238117013e99432ef1166d2f6b747ff7921a11c6cc311ef9cd9"} Jan 23 08:00:01 crc kubenswrapper[4937]: I0123 08:00:01.580356 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" event={"ID":"52fc1c96-b647-4e13-9872-d384825446c9","Type":"ContainerStarted","Data":"2512fde197aed37a7304b80d94b3d6d9c12cc4997295ffdb3e21647783c7d99a"} Jan 23 08:00:02 crc kubenswrapper[4937]: I0123 08:00:02.955495 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.004612 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52fc1c96-b647-4e13-9872-d384825446c9-secret-volume\") pod \"52fc1c96-b647-4e13-9872-d384825446c9\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.004740 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52fc1c96-b647-4e13-9872-d384825446c9-config-volume\") pod \"52fc1c96-b647-4e13-9872-d384825446c9\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.004906 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxs9g\" (UniqueName: \"kubernetes.io/projected/52fc1c96-b647-4e13-9872-d384825446c9-kube-api-access-cxs9g\") pod \"52fc1c96-b647-4e13-9872-d384825446c9\" (UID: \"52fc1c96-b647-4e13-9872-d384825446c9\") " Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.008817 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52fc1c96-b647-4e13-9872-d384825446c9-config-volume" (OuterVolumeSpecName: "config-volume") pod "52fc1c96-b647-4e13-9872-d384825446c9" (UID: "52fc1c96-b647-4e13-9872-d384825446c9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.016867 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52fc1c96-b647-4e13-9872-d384825446c9-kube-api-access-cxs9g" (OuterVolumeSpecName: "kube-api-access-cxs9g") pod "52fc1c96-b647-4e13-9872-d384825446c9" (UID: "52fc1c96-b647-4e13-9872-d384825446c9"). 
InnerVolumeSpecName "kube-api-access-cxs9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.024962 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52fc1c96-b647-4e13-9872-d384825446c9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52fc1c96-b647-4e13-9872-d384825446c9" (UID: "52fc1c96-b647-4e13-9872-d384825446c9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.106850 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxs9g\" (UniqueName: \"kubernetes.io/projected/52fc1c96-b647-4e13-9872-d384825446c9-kube-api-access-cxs9g\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.106889 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52fc1c96-b647-4e13-9872-d384825446c9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.106902 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52fc1c96-b647-4e13-9872-d384825446c9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.600711 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" event={"ID":"52fc1c96-b647-4e13-9872-d384825446c9","Type":"ContainerDied","Data":"2512fde197aed37a7304b80d94b3d6d9c12cc4997295ffdb3e21647783c7d99a"} Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.600748 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-hwqj6" Jan 23 08:00:03 crc kubenswrapper[4937]: I0123 08:00:03.600778 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2512fde197aed37a7304b80d94b3d6d9c12cc4997295ffdb3e21647783c7d99a" Jan 23 08:00:04 crc kubenswrapper[4937]: I0123 08:00:04.027443 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"] Jan 23 08:00:04 crc kubenswrapper[4937]: I0123 08:00:04.037789 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485875-r8zfx"] Jan 23 08:00:04 crc kubenswrapper[4937]: I0123 08:00:04.545890 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b870fb-925f-4021-9ada-8977fd5b9d9c" path="/var/lib/kubelet/pods/48b870fb-925f-4021-9ada-8977fd5b9d9c/volumes" Jan 23 08:00:09 crc kubenswrapper[4937]: I0123 08:00:09.680259 4937 scope.go:117] "RemoveContainer" containerID="450b7facb7cb8dd4e313fce3c2e0777efe4148078eb5b7c8d40bd30e12085368" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.147922 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c7v9r"] Jan 23 08:00:19 crc kubenswrapper[4937]: E0123 08:00:19.149473 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52fc1c96-b647-4e13-9872-d384825446c9" containerName="collect-profiles" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.149494 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="52fc1c96-b647-4e13-9872-d384825446c9" containerName="collect-profiles" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.151989 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="52fc1c96-b647-4e13-9872-d384825446c9" containerName="collect-profiles" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.154822 4937 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.165117 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7v9r"] Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.257370 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-catalog-content\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.258215 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ll9p\" (UniqueName: \"kubernetes.io/projected/0bcd8824-9744-4374-a6c7-7a5eca003174-kube-api-access-2ll9p\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.258274 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-utilities\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.360392 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-catalog-content\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.360494 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2ll9p\" (UniqueName: \"kubernetes.io/projected/0bcd8824-9744-4374-a6c7-7a5eca003174-kube-api-access-2ll9p\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.360520 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-utilities\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.361244 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-utilities\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.361202 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-catalog-content\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.382249 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ll9p\" (UniqueName: \"kubernetes.io/projected/0bcd8824-9744-4374-a6c7-7a5eca003174-kube-api-access-2ll9p\") pod \"redhat-operators-c7v9r\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.480824 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:19 crc kubenswrapper[4937]: I0123 08:00:19.942914 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c7v9r"] Jan 23 08:00:20 crc kubenswrapper[4937]: I0123 08:00:20.764754 4937 generic.go:334] "Generic (PLEG): container finished" podID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerID="a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14" exitCode=0 Jan 23 08:00:20 crc kubenswrapper[4937]: I0123 08:00:20.764880 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerDied","Data":"a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14"} Jan 23 08:00:20 crc kubenswrapper[4937]: I0123 08:00:20.765262 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerStarted","Data":"fe973b3913758c6096fbe23f723dae3acd0e4e63278fc3285579f48d695135a5"} Jan 23 08:00:20 crc kubenswrapper[4937]: I0123 08:00:20.767274 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:00:24 crc kubenswrapper[4937]: I0123 08:00:24.813957 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerStarted","Data":"1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d"} Jan 23 08:00:28 crc kubenswrapper[4937]: I0123 08:00:28.855055 4937 generic.go:334] "Generic (PLEG): container finished" podID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerID="1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d" exitCode=0 Jan 23 08:00:28 crc kubenswrapper[4937]: I0123 08:00:28.855140 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerDied","Data":"1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d"} Jan 23 08:00:29 crc kubenswrapper[4937]: I0123 08:00:29.882923 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerStarted","Data":"3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0"} Jan 23 08:00:29 crc kubenswrapper[4937]: I0123 08:00:29.912199 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c7v9r" podStartSLOduration=2.369325178 podStartE2EDuration="10.912177521s" podCreationTimestamp="2026-01-23 08:00:19 +0000 UTC" firstStartedPulling="2026-01-23 08:00:20.76698612 +0000 UTC m=+5220.570752783" lastFinishedPulling="2026-01-23 08:00:29.309838473 +0000 UTC m=+5229.113605126" observedRunningTime="2026-01-23 08:00:29.902487528 +0000 UTC m=+5229.706254201" watchObservedRunningTime="2026-01-23 08:00:29.912177521 +0000 UTC m=+5229.715944174" Jan 23 08:00:39 crc kubenswrapper[4937]: I0123 08:00:39.481784 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:39 crc kubenswrapper[4937]: I0123 08:00:39.482270 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:39 crc kubenswrapper[4937]: I0123 08:00:39.921805 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.039829 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.633864 4937 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-tr5qr"] Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.636487 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.663014 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tr5qr"] Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.673400 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-catalog-content\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.673495 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-utilities\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.673538 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m77m\" (UniqueName: \"kubernetes.io/projected/527d789c-8d6b-445e-aa5c-67ecfa924de7-kube-api-access-2m77m\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.775693 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-catalog-content\") pod \"certified-operators-tr5qr\" (UID: 
\"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.776069 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-utilities\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.776198 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m77m\" (UniqueName: \"kubernetes.io/projected/527d789c-8d6b-445e-aa5c-67ecfa924de7-kube-api-access-2m77m\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.776506 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-catalog-content\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.776550 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-utilities\") pod \"certified-operators-tr5qr\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.796264 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m77m\" (UniqueName: \"kubernetes.io/projected/527d789c-8d6b-445e-aa5c-67ecfa924de7-kube-api-access-2m77m\") pod \"certified-operators-tr5qr\" (UID: 
\"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:40 crc kubenswrapper[4937]: I0123 08:00:40.963320 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:41 crc kubenswrapper[4937]: I0123 08:00:41.031172 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c7v9r"] Jan 23 08:00:41 crc kubenswrapper[4937]: I0123 08:00:41.551167 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tr5qr"] Jan 23 08:00:41 crc kubenswrapper[4937]: I0123 08:00:41.993006 4937 generic.go:334] "Generic (PLEG): container finished" podID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerID="159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35" exitCode=0 Jan 23 08:00:41 crc kubenswrapper[4937]: I0123 08:00:41.993096 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tr5qr" event={"ID":"527d789c-8d6b-445e-aa5c-67ecfa924de7","Type":"ContainerDied","Data":"159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35"} Jan 23 08:00:41 crc kubenswrapper[4937]: I0123 08:00:41.993479 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tr5qr" event={"ID":"527d789c-8d6b-445e-aa5c-67ecfa924de7","Type":"ContainerStarted","Data":"5ea91e8ae1f532077c97c5eef9d2fd0015bf2ea4800b76a22e41a0f3797a9c7b"} Jan 23 08:00:41 crc kubenswrapper[4937]: I0123 08:00:41.993510 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c7v9r" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="registry-server" containerID="cri-o://3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0" gracePeriod=2 Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.604902 4937 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.723144 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-utilities\") pod \"0bcd8824-9744-4374-a6c7-7a5eca003174\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.723207 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-catalog-content\") pod \"0bcd8824-9744-4374-a6c7-7a5eca003174\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.723398 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ll9p\" (UniqueName: \"kubernetes.io/projected/0bcd8824-9744-4374-a6c7-7a5eca003174-kube-api-access-2ll9p\") pod \"0bcd8824-9744-4374-a6c7-7a5eca003174\" (UID: \"0bcd8824-9744-4374-a6c7-7a5eca003174\") " Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.723886 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-utilities" (OuterVolumeSpecName: "utilities") pod "0bcd8824-9744-4374-a6c7-7a5eca003174" (UID: "0bcd8824-9744-4374-a6c7-7a5eca003174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.733886 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcd8824-9744-4374-a6c7-7a5eca003174-kube-api-access-2ll9p" (OuterVolumeSpecName: "kube-api-access-2ll9p") pod "0bcd8824-9744-4374-a6c7-7a5eca003174" (UID: "0bcd8824-9744-4374-a6c7-7a5eca003174"). 
InnerVolumeSpecName "kube-api-access-2ll9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.826370 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ll9p\" (UniqueName: \"kubernetes.io/projected/0bcd8824-9744-4374-a6c7-7a5eca003174-kube-api-access-2ll9p\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.826618 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.858087 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bcd8824-9744-4374-a6c7-7a5eca003174" (UID: "0bcd8824-9744-4374-a6c7-7a5eca003174"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:00:42 crc kubenswrapper[4937]: I0123 08:00:42.929786 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcd8824-9744-4374-a6c7-7a5eca003174-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.003193 4937 generic.go:334] "Generic (PLEG): container finished" podID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerID="3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0" exitCode=0 Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.003239 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerDied","Data":"3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0"} Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.003277 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c7v9r" event={"ID":"0bcd8824-9744-4374-a6c7-7a5eca003174","Type":"ContainerDied","Data":"fe973b3913758c6096fbe23f723dae3acd0e4e63278fc3285579f48d695135a5"} Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.003282 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c7v9r" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.003296 4937 scope.go:117] "RemoveContainer" containerID="3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.050117 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c7v9r"] Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.056966 4937 scope.go:117] "RemoveContainer" containerID="1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.060566 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c7v9r"] Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.099425 4937 scope.go:117] "RemoveContainer" containerID="a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.157534 4937 scope.go:117] "RemoveContainer" containerID="3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0" Jan 23 08:00:43 crc kubenswrapper[4937]: E0123 08:00:43.158214 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0\": container with ID starting with 3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0 not found: ID does not exist" containerID="3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.158250 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0"} err="failed to get container status \"3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0\": rpc error: code = NotFound desc = could not find container 
\"3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0\": container with ID starting with 3e7b45b72013d10ff3ac52f826bd5eba49cd8f712f58d27657c91a4ef88a5af0 not found: ID does not exist" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.158273 4937 scope.go:117] "RemoveContainer" containerID="1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d" Jan 23 08:00:43 crc kubenswrapper[4937]: E0123 08:00:43.158631 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d\": container with ID starting with 1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d not found: ID does not exist" containerID="1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.158655 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d"} err="failed to get container status \"1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d\": rpc error: code = NotFound desc = could not find container \"1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d\": container with ID starting with 1db93d888a1e87217bb09dfbf779082ea71da3e90483950fd64cabcbe668be4d not found: ID does not exist" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.158672 4937 scope.go:117] "RemoveContainer" containerID="a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14" Jan 23 08:00:43 crc kubenswrapper[4937]: E0123 08:00:43.159017 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14\": container with ID starting with a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14 not found: ID does not exist" 
containerID="a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14" Jan 23 08:00:43 crc kubenswrapper[4937]: I0123 08:00:43.159079 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14"} err="failed to get container status \"a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14\": rpc error: code = NotFound desc = could not find container \"a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14\": container with ID starting with a486f4911ced38ec24d0b218399eeb97928359f6cc66e9048d17309f0df4bf14 not found: ID does not exist" Jan 23 08:00:44 crc kubenswrapper[4937]: I0123 08:00:44.016752 4937 generic.go:334] "Generic (PLEG): container finished" podID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerID="f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f" exitCode=0 Jan 23 08:00:44 crc kubenswrapper[4937]: I0123 08:00:44.016853 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tr5qr" event={"ID":"527d789c-8d6b-445e-aa5c-67ecfa924de7","Type":"ContainerDied","Data":"f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f"} Jan 23 08:00:44 crc kubenswrapper[4937]: I0123 08:00:44.541947 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" path="/var/lib/kubelet/pods/0bcd8824-9744-4374-a6c7-7a5eca003174/volumes" Jan 23 08:00:45 crc kubenswrapper[4937]: I0123 08:00:45.031256 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tr5qr" event={"ID":"527d789c-8d6b-445e-aa5c-67ecfa924de7","Type":"ContainerStarted","Data":"e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8"} Jan 23 08:00:45 crc kubenswrapper[4937]: I0123 08:00:45.066200 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-tr5qr" podStartSLOduration=2.5391022359999997 podStartE2EDuration="5.066157776s" podCreationTimestamp="2026-01-23 08:00:40 +0000 UTC" firstStartedPulling="2026-01-23 08:00:41.994829156 +0000 UTC m=+5241.798595809" lastFinishedPulling="2026-01-23 08:00:44.521884696 +0000 UTC m=+5244.325651349" observedRunningTime="2026-01-23 08:00:45.052103325 +0000 UTC m=+5244.855869978" watchObservedRunningTime="2026-01-23 08:00:45.066157776 +0000 UTC m=+5244.869924449" Jan 23 08:00:50 crc kubenswrapper[4937]: I0123 08:00:50.964101 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:50 crc kubenswrapper[4937]: I0123 08:00:50.964709 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:51 crc kubenswrapper[4937]: I0123 08:00:51.020179 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:51 crc kubenswrapper[4937]: I0123 08:00:51.147090 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:51 crc kubenswrapper[4937]: I0123 08:00:51.258017 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tr5qr"] Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.110621 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tr5qr" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="registry-server" containerID="cri-o://e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8" gracePeriod=2 Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.592639 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.708469 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m77m\" (UniqueName: \"kubernetes.io/projected/527d789c-8d6b-445e-aa5c-67ecfa924de7-kube-api-access-2m77m\") pod \"527d789c-8d6b-445e-aa5c-67ecfa924de7\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.708734 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-utilities\") pod \"527d789c-8d6b-445e-aa5c-67ecfa924de7\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.708767 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-catalog-content\") pod \"527d789c-8d6b-445e-aa5c-67ecfa924de7\" (UID: \"527d789c-8d6b-445e-aa5c-67ecfa924de7\") " Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.710079 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-utilities" (OuterVolumeSpecName: "utilities") pod "527d789c-8d6b-445e-aa5c-67ecfa924de7" (UID: "527d789c-8d6b-445e-aa5c-67ecfa924de7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.715731 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527d789c-8d6b-445e-aa5c-67ecfa924de7-kube-api-access-2m77m" (OuterVolumeSpecName: "kube-api-access-2m77m") pod "527d789c-8d6b-445e-aa5c-67ecfa924de7" (UID: "527d789c-8d6b-445e-aa5c-67ecfa924de7"). InnerVolumeSpecName "kube-api-access-2m77m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.768155 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "527d789c-8d6b-445e-aa5c-67ecfa924de7" (UID: "527d789c-8d6b-445e-aa5c-67ecfa924de7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.811743 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.812014 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527d789c-8d6b-445e-aa5c-67ecfa924de7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:53 crc kubenswrapper[4937]: I0123 08:00:53.812231 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m77m\" (UniqueName: \"kubernetes.io/projected/527d789c-8d6b-445e-aa5c-67ecfa924de7-kube-api-access-2m77m\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.121485 4937 generic.go:334] "Generic (PLEG): container finished" podID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerID="e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8" exitCode=0 Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.121527 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tr5qr" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.121550 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tr5qr" event={"ID":"527d789c-8d6b-445e-aa5c-67ecfa924de7","Type":"ContainerDied","Data":"e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8"} Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.121776 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tr5qr" event={"ID":"527d789c-8d6b-445e-aa5c-67ecfa924de7","Type":"ContainerDied","Data":"5ea91e8ae1f532077c97c5eef9d2fd0015bf2ea4800b76a22e41a0f3797a9c7b"} Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.121803 4937 scope.go:117] "RemoveContainer" containerID="e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.161638 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tr5qr"] Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.169917 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tr5qr"] Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.170513 4937 scope.go:117] "RemoveContainer" containerID="f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.198572 4937 scope.go:117] "RemoveContainer" containerID="159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.258170 4937 scope.go:117] "RemoveContainer" containerID="e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8" Jan 23 08:00:54 crc kubenswrapper[4937]: E0123 08:00:54.258838 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8\": container with ID starting with e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8 not found: ID does not exist" containerID="e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.258880 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8"} err="failed to get container status \"e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8\": rpc error: code = NotFound desc = could not find container \"e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8\": container with ID starting with e1eaacf2edf5daa88d8e237b774455a934411a9280e011228eb50b235c33dad8 not found: ID does not exist" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.259113 4937 scope.go:117] "RemoveContainer" containerID="f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f" Jan 23 08:00:54 crc kubenswrapper[4937]: E0123 08:00:54.259517 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f\": container with ID starting with f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f not found: ID does not exist" containerID="f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.259545 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f"} err="failed to get container status \"f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f\": rpc error: code = NotFound desc = could not find container \"f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f\": container with ID 
starting with f79ff47b21850c1ee6636fd446afd02fde1cbcd2770a9cfab7afeae6221f697f not found: ID does not exist" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.259564 4937 scope.go:117] "RemoveContainer" containerID="159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35" Jan 23 08:00:54 crc kubenswrapper[4937]: E0123 08:00:54.259846 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35\": container with ID starting with 159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35 not found: ID does not exist" containerID="159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.259877 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35"} err="failed to get container status \"159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35\": rpc error: code = NotFound desc = could not find container \"159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35\": container with ID starting with 159298cafad57a362f84e975b0ef8d5407caebc374aed500e65855e487e3ba35 not found: ID does not exist" Jan 23 08:00:54 crc kubenswrapper[4937]: I0123 08:00:54.539622 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" path="/var/lib/kubelet/pods/527d789c-8d6b-445e-aa5c-67ecfa924de7/volumes" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.168176 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29485921-g5gmn"] Jan 23 08:01:00 crc kubenswrapper[4937]: E0123 08:01:00.169444 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="extract-content" Jan 23 08:01:00 crc 
kubenswrapper[4937]: I0123 08:01:00.169466 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="extract-content" Jan 23 08:01:00 crc kubenswrapper[4937]: E0123 08:01:00.169493 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.169504 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4937]: E0123 08:01:00.169526 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.169536 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4937]: E0123 08:01:00.169555 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="extract-content" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.169565 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="extract-content" Jan 23 08:01:00 crc kubenswrapper[4937]: E0123 08:01:00.169627 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="extract-utilities" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.169639 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="extract-utilities" Jan 23 08:01:00 crc kubenswrapper[4937]: E0123 08:01:00.169664 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="extract-utilities" Jan 23 08:01:00 crc 
kubenswrapper[4937]: I0123 08:01:00.169677 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="extract-utilities" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.170001 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="527d789c-8d6b-445e-aa5c-67ecfa924de7" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.170027 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcd8824-9744-4374-a6c7-7a5eca003174" containerName="registry-server" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.171167 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.181056 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485921-g5gmn"] Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.255067 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-fernet-keys\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.255408 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5lws\" (UniqueName: \"kubernetes.io/projected/eb1979d8-d92d-41da-9fea-452cec7794fb-kube-api-access-l5lws\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.255625 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-config-data\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.255768 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-combined-ca-bundle\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.358199 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-fernet-keys\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.358295 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5lws\" (UniqueName: \"kubernetes.io/projected/eb1979d8-d92d-41da-9fea-452cec7794fb-kube-api-access-l5lws\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.358391 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-config-data\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.358427 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-combined-ca-bundle\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.365116 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-combined-ca-bundle\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.365612 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-fernet-keys\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.367388 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-config-data\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.374014 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5lws\" (UniqueName: \"kubernetes.io/projected/eb1979d8-d92d-41da-9fea-452cec7794fb-kube-api-access-l5lws\") pod \"keystone-cron-29485921-g5gmn\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:00 crc kubenswrapper[4937]: I0123 08:01:00.526364 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:01 crc kubenswrapper[4937]: I0123 08:01:01.664563 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29485921-g5gmn"] Jan 23 08:01:02 crc kubenswrapper[4937]: I0123 08:01:02.206360 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-g5gmn" event={"ID":"eb1979d8-d92d-41da-9fea-452cec7794fb","Type":"ContainerStarted","Data":"a9fdf47ca20f750ba54d80fbcd96357f435459d6a130653033ad04cc8630ff44"} Jan 23 08:01:03 crc kubenswrapper[4937]: I0123 08:01:03.217048 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-g5gmn" event={"ID":"eb1979d8-d92d-41da-9fea-452cec7794fb","Type":"ContainerStarted","Data":"b7d02e1a594c8ccc8601154d276ec576ecaa593c8660c4e90f58f39207361c21"} Jan 23 08:01:03 crc kubenswrapper[4937]: I0123 08:01:03.239493 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29485921-g5gmn" podStartSLOduration=3.239475169 podStartE2EDuration="3.239475169s" podCreationTimestamp="2026-01-23 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:01:03.235142831 +0000 UTC m=+5263.038909494" watchObservedRunningTime="2026-01-23 08:01:03.239475169 +0000 UTC m=+5263.043241822" Jan 23 08:01:07 crc kubenswrapper[4937]: E0123 08:01:07.385787 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb1979d8_d92d_41da_9fea_452cec7794fb.slice/crio-b7d02e1a594c8ccc8601154d276ec576ecaa593c8660c4e90f58f39207361c21.scope\": RecentStats: unable to find data in memory cache]" Jan 23 08:01:07 crc kubenswrapper[4937]: I0123 08:01:07.724653 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:01:07 crc kubenswrapper[4937]: I0123 08:01:07.725090 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:01:08 crc kubenswrapper[4937]: I0123 08:01:08.275266 4937 generic.go:334] "Generic (PLEG): container finished" podID="eb1979d8-d92d-41da-9fea-452cec7794fb" containerID="b7d02e1a594c8ccc8601154d276ec576ecaa593c8660c4e90f58f39207361c21" exitCode=0 Jan 23 08:01:08 crc kubenswrapper[4937]: I0123 08:01:08.275413 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-g5gmn" event={"ID":"eb1979d8-d92d-41da-9fea-452cec7794fb","Type":"ContainerDied","Data":"b7d02e1a594c8ccc8601154d276ec576ecaa593c8660c4e90f58f39207361c21"} Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.646300 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.833947 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-config-data\") pod \"eb1979d8-d92d-41da-9fea-452cec7794fb\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.834234 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-fernet-keys\") pod \"eb1979d8-d92d-41da-9fea-452cec7794fb\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.834409 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5lws\" (UniqueName: \"kubernetes.io/projected/eb1979d8-d92d-41da-9fea-452cec7794fb-kube-api-access-l5lws\") pod \"eb1979d8-d92d-41da-9fea-452cec7794fb\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.834464 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-combined-ca-bundle\") pod \"eb1979d8-d92d-41da-9fea-452cec7794fb\" (UID: \"eb1979d8-d92d-41da-9fea-452cec7794fb\") " Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.841653 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb1979d8-d92d-41da-9fea-452cec7794fb-kube-api-access-l5lws" (OuterVolumeSpecName: "kube-api-access-l5lws") pod "eb1979d8-d92d-41da-9fea-452cec7794fb" (UID: "eb1979d8-d92d-41da-9fea-452cec7794fb"). InnerVolumeSpecName "kube-api-access-l5lws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.841979 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eb1979d8-d92d-41da-9fea-452cec7794fb" (UID: "eb1979d8-d92d-41da-9fea-452cec7794fb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.866585 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb1979d8-d92d-41da-9fea-452cec7794fb" (UID: "eb1979d8-d92d-41da-9fea-452cec7794fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.905872 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-config-data" (OuterVolumeSpecName: "config-data") pod "eb1979d8-d92d-41da-9fea-452cec7794fb" (UID: "eb1979d8-d92d-41da-9fea-452cec7794fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.937406 4937 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.937445 4937 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.937455 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5lws\" (UniqueName: \"kubernetes.io/projected/eb1979d8-d92d-41da-9fea-452cec7794fb-kube-api-access-l5lws\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:09 crc kubenswrapper[4937]: I0123 08:01:09.937464 4937 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb1979d8-d92d-41da-9fea-452cec7794fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:01:10 crc kubenswrapper[4937]: I0123 08:01:10.322975 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29485921-g5gmn" event={"ID":"eb1979d8-d92d-41da-9fea-452cec7794fb","Type":"ContainerDied","Data":"a9fdf47ca20f750ba54d80fbcd96357f435459d6a130653033ad04cc8630ff44"} Jan 23 08:01:10 crc kubenswrapper[4937]: I0123 08:01:10.323040 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fdf47ca20f750ba54d80fbcd96357f435459d6a130653033ad04cc8630ff44" Jan 23 08:01:10 crc kubenswrapper[4937]: I0123 08:01:10.323170 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29485921-g5gmn" Jan 23 08:01:37 crc kubenswrapper[4937]: I0123 08:01:37.724508 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:01:37 crc kubenswrapper[4937]: I0123 08:01:37.725061 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.724039 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.725768 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.725896 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.726737 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.726885 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" gracePeriod=600 Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.852075 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" exitCode=0 Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.852320 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae"} Jan 23 08:02:07 crc kubenswrapper[4937]: I0123 08:02:07.852521 4937 scope.go:117] "RemoveContainer" containerID="2750c8793978e6751989bf1064df680550e4c785e14b6869bda1eb315f35a26e" Jan 23 08:02:07 crc kubenswrapper[4937]: E0123 08:02:07.853952 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:02:08 crc kubenswrapper[4937]: I0123 08:02:08.862807 4937 
scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:02:08 crc kubenswrapper[4937]: E0123 08:02:08.863199 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:02:23 crc kubenswrapper[4937]: I0123 08:02:23.526442 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:02:23 crc kubenswrapper[4937]: E0123 08:02:23.527136 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:02:35 crc kubenswrapper[4937]: I0123 08:02:35.531505 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:02:35 crc kubenswrapper[4937]: E0123 08:02:35.532950 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:02:49 crc kubenswrapper[4937]: I0123 
08:02:49.526643 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:02:49 crc kubenswrapper[4937]: E0123 08:02:49.527266 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:03:00 crc kubenswrapper[4937]: I0123 08:03:00.535331 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:03:00 crc kubenswrapper[4937]: E0123 08:03:00.536578 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:03:11 crc kubenswrapper[4937]: I0123 08:03:11.526301 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:03:11 crc kubenswrapper[4937]: E0123 08:03:11.527164 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:03:22 crc 
kubenswrapper[4937]: I0123 08:03:22.526785 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:03:22 crc kubenswrapper[4937]: E0123 08:03:22.528879 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:03:35 crc kubenswrapper[4937]: I0123 08:03:35.526396 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:03:35 crc kubenswrapper[4937]: E0123 08:03:35.527274 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:03:47 crc kubenswrapper[4937]: I0123 08:03:47.526569 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:03:47 crc kubenswrapper[4937]: E0123 08:03:47.527499 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 
23 08:03:59 crc kubenswrapper[4937]: I0123 08:03:59.526450 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:03:59 crc kubenswrapper[4937]: E0123 08:03:59.527177 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:04:12 crc kubenswrapper[4937]: I0123 08:04:12.526947 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:04:12 crc kubenswrapper[4937]: E0123 08:04:12.527636 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:04:24 crc kubenswrapper[4937]: I0123 08:04:24.526393 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:04:24 crc kubenswrapper[4937]: E0123 08:04:24.527155 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:04:39 crc kubenswrapper[4937]: I0123 08:04:39.527154 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:04:39 crc kubenswrapper[4937]: E0123 08:04:39.529693 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:04:52 crc kubenswrapper[4937]: I0123 08:04:52.529126 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:04:52 crc kubenswrapper[4937]: E0123 08:04:52.532774 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:05:03 crc kubenswrapper[4937]: I0123 08:05:03.527137 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:05:03 crc kubenswrapper[4937]: E0123 08:05:03.527987 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:05:18 crc kubenswrapper[4937]: I0123 08:05:18.526831 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:05:18 crc kubenswrapper[4937]: E0123 08:05:18.527622 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:05:30 crc kubenswrapper[4937]: I0123 08:05:30.538494 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:05:30 crc kubenswrapper[4937]: E0123 08:05:30.540205 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:05:43 crc kubenswrapper[4937]: I0123 08:05:43.527023 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:05:43 crc kubenswrapper[4937]: E0123 08:05:43.527863 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:05:57 crc kubenswrapper[4937]: I0123 08:05:57.529908 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:05:57 crc kubenswrapper[4937]: E0123 08:05:57.530990 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:06:10 crc kubenswrapper[4937]: I0123 08:06:10.526608 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:06:10 crc kubenswrapper[4937]: E0123 08:06:10.527419 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:06:22 crc kubenswrapper[4937]: I0123 08:06:22.526976 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:06:22 crc kubenswrapper[4937]: E0123 08:06:22.527749 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:06:35 crc kubenswrapper[4937]: I0123 08:06:35.526511 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:06:35 crc kubenswrapper[4937]: E0123 08:06:35.527726 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:06:48 crc kubenswrapper[4937]: I0123 08:06:48.525870 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:06:48 crc kubenswrapper[4937]: E0123 08:06:48.526634 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.407004 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g4lp4"] Jan 23 08:07:00 crc kubenswrapper[4937]: E0123 08:07:00.408027 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb1979d8-d92d-41da-9fea-452cec7794fb" containerName="keystone-cron" Jan 23 08:07:00 crc 
kubenswrapper[4937]: I0123 08:07:00.408045 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb1979d8-d92d-41da-9fea-452cec7794fb" containerName="keystone-cron" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.408280 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb1979d8-d92d-41da-9fea-452cec7794fb" containerName="keystone-cron" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.412879 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.425310 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4lp4"] Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.545768 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-catalog-content\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.545989 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-utilities\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.546151 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlvtz\" (UniqueName: \"kubernetes.io/projected/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-kube-api-access-nlvtz\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc 
kubenswrapper[4937]: I0123 08:07:00.647946 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-catalog-content\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.648110 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-utilities\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.648378 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlvtz\" (UniqueName: \"kubernetes.io/projected/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-kube-api-access-nlvtz\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.649137 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-utilities\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.649429 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-catalog-content\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.669477 4937 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlvtz\" (UniqueName: \"kubernetes.io/projected/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-kube-api-access-nlvtz\") pod \"redhat-marketplace-g4lp4\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:00 crc kubenswrapper[4937]: I0123 08:07:00.733841 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:01 crc kubenswrapper[4937]: I0123 08:07:01.225078 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4lp4"] Jan 23 08:07:01 crc kubenswrapper[4937]: I0123 08:07:01.594571 4937 generic.go:334] "Generic (PLEG): container finished" podID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerID="59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142" exitCode=0 Jan 23 08:07:01 crc kubenswrapper[4937]: I0123 08:07:01.594647 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerDied","Data":"59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142"} Jan 23 08:07:01 crc kubenswrapper[4937]: I0123 08:07:01.594682 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerStarted","Data":"96325e5b380a25b1864531e539d10adcb7f66fdb4e23f13e7e7aa44976aa2235"} Jan 23 08:07:01 crc kubenswrapper[4937]: I0123 08:07:01.596977 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:07:02 crc kubenswrapper[4937]: I0123 08:07:02.605787 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" 
event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerStarted","Data":"f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1"} Jan 23 08:07:03 crc kubenswrapper[4937]: I0123 08:07:03.526338 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:07:03 crc kubenswrapper[4937]: E0123 08:07:03.527142 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:07:03 crc kubenswrapper[4937]: I0123 08:07:03.616692 4937 generic.go:334] "Generic (PLEG): container finished" podID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerID="f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1" exitCode=0 Jan 23 08:07:03 crc kubenswrapper[4937]: I0123 08:07:03.616735 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerDied","Data":"f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1"} Jan 23 08:07:04 crc kubenswrapper[4937]: I0123 08:07:04.627869 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerStarted","Data":"0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf"} Jan 23 08:07:04 crc kubenswrapper[4937]: I0123 08:07:04.662287 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g4lp4" podStartSLOduration=2.207730457 podStartE2EDuration="4.662267727s" 
podCreationTimestamp="2026-01-23 08:07:00 +0000 UTC" firstStartedPulling="2026-01-23 08:07:01.596773854 +0000 UTC m=+5621.400540507" lastFinishedPulling="2026-01-23 08:07:04.051311124 +0000 UTC m=+5623.855077777" observedRunningTime="2026-01-23 08:07:04.656906861 +0000 UTC m=+5624.460673524" watchObservedRunningTime="2026-01-23 08:07:04.662267727 +0000 UTC m=+5624.466034380" Jan 23 08:07:10 crc kubenswrapper[4937]: I0123 08:07:10.734469 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:10 crc kubenswrapper[4937]: I0123 08:07:10.735008 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:10 crc kubenswrapper[4937]: I0123 08:07:10.780388 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:11 crc kubenswrapper[4937]: I0123 08:07:11.748848 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:11 crc kubenswrapper[4937]: I0123 08:07:11.801442 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4lp4"] Jan 23 08:07:13 crc kubenswrapper[4937]: I0123 08:07:13.708814 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g4lp4" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="registry-server" containerID="cri-o://0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf" gracePeriod=2 Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.170465 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.335289 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlvtz\" (UniqueName: \"kubernetes.io/projected/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-kube-api-access-nlvtz\") pod \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.335360 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-catalog-content\") pod \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.335408 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-utilities\") pod \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\" (UID: \"4c4c797c-7f14-4439-8eaf-f979bf6a3c60\") " Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.336102 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-utilities" (OuterVolumeSpecName: "utilities") pod "4c4c797c-7f14-4439-8eaf-f979bf6a3c60" (UID: "4c4c797c-7f14-4439-8eaf-f979bf6a3c60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.336283 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.344701 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-kube-api-access-nlvtz" (OuterVolumeSpecName: "kube-api-access-nlvtz") pod "4c4c797c-7f14-4439-8eaf-f979bf6a3c60" (UID: "4c4c797c-7f14-4439-8eaf-f979bf6a3c60"). InnerVolumeSpecName "kube-api-access-nlvtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.366798 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c4c797c-7f14-4439-8eaf-f979bf6a3c60" (UID: "4c4c797c-7f14-4439-8eaf-f979bf6a3c60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.438542 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlvtz\" (UniqueName: \"kubernetes.io/projected/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-kube-api-access-nlvtz\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.438572 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c4c797c-7f14-4439-8eaf-f979bf6a3c60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.526267 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.719769 4937 generic.go:334] "Generic (PLEG): container finished" podID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerID="0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf" exitCode=0 Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.719823 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerDied","Data":"0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf"} Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.719845 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4lp4" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.719865 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4lp4" event={"ID":"4c4c797c-7f14-4439-8eaf-f979bf6a3c60","Type":"ContainerDied","Data":"96325e5b380a25b1864531e539d10adcb7f66fdb4e23f13e7e7aa44976aa2235"} Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.719888 4937 scope.go:117] "RemoveContainer" containerID="0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.750113 4937 scope.go:117] "RemoveContainer" containerID="f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.750276 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4lp4"] Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.765329 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4lp4"] Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.774191 4937 scope.go:117] "RemoveContainer" containerID="59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.793556 4937 scope.go:117] "RemoveContainer" containerID="0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf" Jan 23 08:07:14 crc kubenswrapper[4937]: E0123 08:07:14.794004 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf\": container with ID starting with 0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf not found: ID does not exist" containerID="0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.794037 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf"} err="failed to get container status \"0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf\": rpc error: code = NotFound desc = could not find container \"0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf\": container with ID starting with 0e3e33f7cc769da278b8bef185f85848c98d702924ff5c42be455f374ba5e6bf not found: ID does not exist" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.794058 4937 scope.go:117] "RemoveContainer" containerID="f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1" Jan 23 08:07:14 crc kubenswrapper[4937]: E0123 08:07:14.794551 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1\": container with ID starting with f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1 not found: ID does not exist" containerID="f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.794578 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1"} err="failed to get container status \"f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1\": rpc error: code = NotFound desc = could not find container \"f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1\": container with ID starting with f92a5c6f43487badad489460bad564ecab7ba5ca0aaad11a952f72259ab3cfb1 not found: ID does not exist" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.794610 4937 scope.go:117] "RemoveContainer" containerID="59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142" Jan 23 08:07:14 crc kubenswrapper[4937]: E0123 
08:07:14.794943 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142\": container with ID starting with 59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142 not found: ID does not exist" containerID="59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142" Jan 23 08:07:14 crc kubenswrapper[4937]: I0123 08:07:14.794973 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142"} err="failed to get container status \"59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142\": rpc error: code = NotFound desc = could not find container \"59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142\": container with ID starting with 59b4facab21f3140082d31257985b88f20e9ab6d4067c050cee3a3a713621142 not found: ID does not exist" Jan 23 08:07:15 crc kubenswrapper[4937]: I0123 08:07:15.731890 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"30a2a07eaf2a2724f82eb3b0ae80dbd8a7c9690d9e2b1a0374ce255e1044ecea"} Jan 23 08:07:16 crc kubenswrapper[4937]: I0123 08:07:16.560642 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" path="/var/lib/kubelet/pods/4c4c797c-7f14-4439-8eaf-f979bf6a3c60/volumes" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.313790 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pdmm2"] Jan 23 08:08:44 crc kubenswrapper[4937]: E0123 08:08:44.314782 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="extract-content" Jan 23 08:08:44 crc 
kubenswrapper[4937]: I0123 08:08:44.314795 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="extract-content" Jan 23 08:08:44 crc kubenswrapper[4937]: E0123 08:08:44.314841 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="extract-utilities" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.314848 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="extract-utilities" Jan 23 08:08:44 crc kubenswrapper[4937]: E0123 08:08:44.314862 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="registry-server" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.314868 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="registry-server" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.315077 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4c797c-7f14-4439-8eaf-f979bf6a3c60" containerName="registry-server" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.316477 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.325719 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdmm2"] Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.441874 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-utilities\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.442176 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kn9r\" (UniqueName: \"kubernetes.io/projected/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-kube-api-access-8kn9r\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.442274 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-catalog-content\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.544046 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-utilities\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.544110 4937 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8kn9r\" (UniqueName: \"kubernetes.io/projected/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-kube-api-access-8kn9r\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.544219 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-catalog-content\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.544872 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-catalog-content\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.544883 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-utilities\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.568504 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kn9r\" (UniqueName: \"kubernetes.io/projected/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-kube-api-access-8kn9r\") pod \"community-operators-pdmm2\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:44 crc kubenswrapper[4937]: I0123 08:08:44.645418 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:45 crc kubenswrapper[4937]: I0123 08:08:45.153231 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdmm2"] Jan 23 08:08:45 crc kubenswrapper[4937]: I0123 08:08:45.588423 4937 generic.go:334] "Generic (PLEG): container finished" podID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerID="779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1" exitCode=0 Jan 23 08:08:45 crc kubenswrapper[4937]: I0123 08:08:45.588534 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerDied","Data":"779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1"} Jan 23 08:08:45 crc kubenswrapper[4937]: I0123 08:08:45.588795 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerStarted","Data":"35e8a098ddb2afcc66ddea49cc3ded9368791559eb3cce531b28941bd50ef9df"} Jan 23 08:08:46 crc kubenswrapper[4937]: I0123 08:08:46.601194 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerStarted","Data":"35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc"} Jan 23 08:08:47 crc kubenswrapper[4937]: I0123 08:08:47.610052 4937 generic.go:334] "Generic (PLEG): container finished" podID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerID="35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc" exitCode=0 Jan 23 08:08:47 crc kubenswrapper[4937]: I0123 08:08:47.610396 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" 
event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerDied","Data":"35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc"} Jan 23 08:08:48 crc kubenswrapper[4937]: I0123 08:08:48.621739 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerStarted","Data":"2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07"} Jan 23 08:08:48 crc kubenswrapper[4937]: I0123 08:08:48.651431 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pdmm2" podStartSLOduration=2.182106003 podStartE2EDuration="4.651411023s" podCreationTimestamp="2026-01-23 08:08:44 +0000 UTC" firstStartedPulling="2026-01-23 08:08:45.590501116 +0000 UTC m=+5725.394267769" lastFinishedPulling="2026-01-23 08:08:48.059806126 +0000 UTC m=+5727.863572789" observedRunningTime="2026-01-23 08:08:48.642512532 +0000 UTC m=+5728.446279195" watchObservedRunningTime="2026-01-23 08:08:48.651411023 +0000 UTC m=+5728.455177676" Jan 23 08:08:54 crc kubenswrapper[4937]: I0123 08:08:54.646076 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:54 crc kubenswrapper[4937]: I0123 08:08:54.646641 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:54 crc kubenswrapper[4937]: I0123 08:08:54.697354 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:54 crc kubenswrapper[4937]: I0123 08:08:54.749670 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:54 crc kubenswrapper[4937]: I0123 08:08:54.934085 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-pdmm2"] Jan 23 08:08:56 crc kubenswrapper[4937]: I0123 08:08:56.694252 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pdmm2" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="registry-server" containerID="cri-o://2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07" gracePeriod=2 Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.477573 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.642810 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-utilities\") pod \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.642928 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-catalog-content\") pod \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.643109 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kn9r\" (UniqueName: \"kubernetes.io/projected/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-kube-api-access-8kn9r\") pod \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\" (UID: \"6d04aacd-aa44-4559-b72b-4ed9c4762bc0\") " Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.644110 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-utilities" (OuterVolumeSpecName: "utilities") pod "6d04aacd-aa44-4559-b72b-4ed9c4762bc0" (UID: 
"6d04aacd-aa44-4559-b72b-4ed9c4762bc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.655889 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-kube-api-access-8kn9r" (OuterVolumeSpecName: "kube-api-access-8kn9r") pod "6d04aacd-aa44-4559-b72b-4ed9c4762bc0" (UID: "6d04aacd-aa44-4559-b72b-4ed9c4762bc0"). InnerVolumeSpecName "kube-api-access-8kn9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.698023 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d04aacd-aa44-4559-b72b-4ed9c4762bc0" (UID: "6d04aacd-aa44-4559-b72b-4ed9c4762bc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.705297 4937 generic.go:334] "Generic (PLEG): container finished" podID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerID="2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07" exitCode=0 Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.705357 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerDied","Data":"2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07"} Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.705399 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdmm2" event={"ID":"6d04aacd-aa44-4559-b72b-4ed9c4762bc0","Type":"ContainerDied","Data":"35e8a098ddb2afcc66ddea49cc3ded9368791559eb3cce531b28941bd50ef9df"} Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.705426 
4937 scope.go:117] "RemoveContainer" containerID="2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.705364 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdmm2" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.726111 4937 scope.go:117] "RemoveContainer" containerID="35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.742967 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdmm2"] Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.745560 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kn9r\" (UniqueName: \"kubernetes.io/projected/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-kube-api-access-8kn9r\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.745622 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.745637 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d04aacd-aa44-4559-b72b-4ed9c4762bc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.755632 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pdmm2"] Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.794783 4937 scope.go:117] "RemoveContainer" containerID="779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.816222 4937 scope.go:117] "RemoveContainer" 
containerID="2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07" Jan 23 08:08:57 crc kubenswrapper[4937]: E0123 08:08:57.816684 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07\": container with ID starting with 2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07 not found: ID does not exist" containerID="2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.816724 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07"} err="failed to get container status \"2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07\": rpc error: code = NotFound desc = could not find container \"2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07\": container with ID starting with 2794e0b0ed7d1bc77d0a3ae4678479b12ec525acd68b02ecc577129836722f07 not found: ID does not exist" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.816751 4937 scope.go:117] "RemoveContainer" containerID="35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc" Jan 23 08:08:57 crc kubenswrapper[4937]: E0123 08:08:57.817017 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc\": container with ID starting with 35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc not found: ID does not exist" containerID="35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.817069 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc"} err="failed to get container status \"35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc\": rpc error: code = NotFound desc = could not find container \"35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc\": container with ID starting with 35d7d27b31c99bc3861f542925f9745da6ba2d073b14e2f68d8636d77b5fa4fc not found: ID does not exist" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.817086 4937 scope.go:117] "RemoveContainer" containerID="779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1" Jan 23 08:08:57 crc kubenswrapper[4937]: E0123 08:08:57.817311 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1\": container with ID starting with 779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1 not found: ID does not exist" containerID="779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1" Jan 23 08:08:57 crc kubenswrapper[4937]: I0123 08:08:57.817341 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1"} err="failed to get container status \"779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1\": rpc error: code = NotFound desc = could not find container \"779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1\": container with ID starting with 779d31efd747aebdd0d55c2e089eab190e8eca38d2fd5cd7c927c263b93d7ed1 not found: ID does not exist" Jan 23 08:08:58 crc kubenswrapper[4937]: I0123 08:08:58.537514 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" path="/var/lib/kubelet/pods/6d04aacd-aa44-4559-b72b-4ed9c4762bc0/volumes" Jan 23 08:09:37 crc kubenswrapper[4937]: I0123 
08:09:37.724023 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:09:37 crc kubenswrapper[4937]: I0123 08:09:37.724688 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:10:07 crc kubenswrapper[4937]: I0123 08:10:07.724169 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:10:07 crc kubenswrapper[4937]: I0123 08:10:07.724705 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:10:37 crc kubenswrapper[4937]: I0123 08:10:37.724097 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:10:37 crc kubenswrapper[4937]: I0123 08:10:37.724526 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:10:37 crc kubenswrapper[4937]: I0123 08:10:37.724572 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 08:10:38 crc kubenswrapper[4937]: I0123 08:10:38.542473 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30a2a07eaf2a2724f82eb3b0ae80dbd8a7c9690d9e2b1a0374ce255e1044ecea"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:10:38 crc kubenswrapper[4937]: I0123 08:10:38.542566 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://30a2a07eaf2a2724f82eb3b0ae80dbd8a7c9690d9e2b1a0374ce255e1044ecea" gracePeriod=600 Jan 23 08:10:39 crc kubenswrapper[4937]: I0123 08:10:39.551502 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="30a2a07eaf2a2724f82eb3b0ae80dbd8a7c9690d9e2b1a0374ce255e1044ecea" exitCode=0 Jan 23 08:10:39 crc kubenswrapper[4937]: I0123 08:10:39.551579 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"30a2a07eaf2a2724f82eb3b0ae80dbd8a7c9690d9e2b1a0374ce255e1044ecea"} Jan 23 08:10:39 crc kubenswrapper[4937]: I0123 08:10:39.552017 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"} Jan 23 08:10:39 crc kubenswrapper[4937]: I0123 08:10:39.552040 4937 scope.go:117] "RemoveContainer" containerID="ca2e9b4f7de9089a610dca2fad6d694d2b9d7860bad51aea6dea1dfed93920ae" Jan 23 08:11:15 crc kubenswrapper[4937]: I0123 08:11:15.952585 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zpp85"] Jan 23 08:11:15 crc kubenswrapper[4937]: E0123 08:11:15.953472 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="extract-utilities" Jan 23 08:11:15 crc kubenswrapper[4937]: I0123 08:11:15.953484 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="extract-utilities" Jan 23 08:11:15 crc kubenswrapper[4937]: E0123 08:11:15.953510 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="extract-content" Jan 23 08:11:15 crc kubenswrapper[4937]: I0123 08:11:15.953515 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="extract-content" Jan 23 08:11:15 crc kubenswrapper[4937]: E0123 08:11:15.953522 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="registry-server" Jan 23 08:11:15 crc kubenswrapper[4937]: I0123 08:11:15.953528 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="registry-server" Jan 23 08:11:15 crc kubenswrapper[4937]: I0123 08:11:15.953788 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d04aacd-aa44-4559-b72b-4ed9c4762bc0" containerName="registry-server" Jan 23 08:11:15 crc 
kubenswrapper[4937]: I0123 08:11:15.955244 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:15 crc kubenswrapper[4937]: I0123 08:11:15.974995 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpp85"] Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.067429 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-978j8\" (UniqueName: \"kubernetes.io/projected/5b6fa08e-8033-472d-9c2c-b5487572f671-kube-api-access-978j8\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.067485 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-utilities\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.067510 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-catalog-content\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.169718 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-978j8\" (UniqueName: \"kubernetes.io/projected/5b6fa08e-8033-472d-9c2c-b5487572f671-kube-api-access-978j8\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " 
pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.169795 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-utilities\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.169819 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-catalog-content\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.170298 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-utilities\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.170418 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-catalog-content\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.198040 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-978j8\" (UniqueName: \"kubernetes.io/projected/5b6fa08e-8033-472d-9c2c-b5487572f671-kube-api-access-978j8\") pod \"certified-operators-zpp85\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " 
pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.272462 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:16 crc kubenswrapper[4937]: I0123 08:11:16.925573 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpp85"] Jan 23 08:11:17 crc kubenswrapper[4937]: I0123 08:11:17.904992 4937 generic.go:334] "Generic (PLEG): container finished" podID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerID="da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7" exitCode=0 Jan 23 08:11:17 crc kubenswrapper[4937]: I0123 08:11:17.905068 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpp85" event={"ID":"5b6fa08e-8033-472d-9c2c-b5487572f671","Type":"ContainerDied","Data":"da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7"} Jan 23 08:11:17 crc kubenswrapper[4937]: I0123 08:11:17.905296 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpp85" event={"ID":"5b6fa08e-8033-472d-9c2c-b5487572f671","Type":"ContainerStarted","Data":"5861be62bf13ff76791d11523aa0c52af2165fa77ff169e059e981f1190f7a80"} Jan 23 08:11:18 crc kubenswrapper[4937]: I0123 08:11:18.917676 4937 generic.go:334] "Generic (PLEG): container finished" podID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerID="fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a" exitCode=0 Jan 23 08:11:18 crc kubenswrapper[4937]: I0123 08:11:18.917777 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpp85" event={"ID":"5b6fa08e-8033-472d-9c2c-b5487572f671","Type":"ContainerDied","Data":"fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a"} Jan 23 08:11:19 crc kubenswrapper[4937]: I0123 08:11:19.929209 4937 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpp85" event={"ID":"5b6fa08e-8033-472d-9c2c-b5487572f671","Type":"ContainerStarted","Data":"775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee"} Jan 23 08:11:19 crc kubenswrapper[4937]: I0123 08:11:19.957274 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zpp85" podStartSLOduration=3.472523807 podStartE2EDuration="4.957250273s" podCreationTimestamp="2026-01-23 08:11:15 +0000 UTC" firstStartedPulling="2026-01-23 08:11:17.908002947 +0000 UTC m=+5877.711769600" lastFinishedPulling="2026-01-23 08:11:19.392729413 +0000 UTC m=+5879.196496066" observedRunningTime="2026-01-23 08:11:19.956116773 +0000 UTC m=+5879.759883426" watchObservedRunningTime="2026-01-23 08:11:19.957250273 +0000 UTC m=+5879.761016926" Jan 23 08:11:26 crc kubenswrapper[4937]: I0123 08:11:26.273194 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:26 crc kubenswrapper[4937]: I0123 08:11:26.273814 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:26 crc kubenswrapper[4937]: I0123 08:11:26.369627 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:27 crc kubenswrapper[4937]: I0123 08:11:27.053764 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:27 crc kubenswrapper[4937]: I0123 08:11:27.101915 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpp85"] Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.013709 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hhhnl"] Jan 23 08:11:29 crc 
kubenswrapper[4937]: I0123 08:11:29.016494 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.020315 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zpp85" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="registry-server" containerID="cri-o://775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee" gracePeriod=2 Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.028195 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhhnl"] Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.180543 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2927\" (UniqueName: \"kubernetes.io/projected/4e1b2910-017b-4771-a6cd-2fae5bbefea4-kube-api-access-q2927\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.180612 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-catalog-content\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.180807 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-utilities\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 
08:11:29.282530 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2927\" (UniqueName: \"kubernetes.io/projected/4e1b2910-017b-4771-a6cd-2fae5bbefea4-kube-api-access-q2927\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.282901 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-catalog-content\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.282974 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-utilities\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.283580 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-catalog-content\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.284001 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-utilities\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.302569 4937 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q2927\" (UniqueName: \"kubernetes.io/projected/4e1b2910-017b-4771-a6cd-2fae5bbefea4-kube-api-access-q2927\") pod \"redhat-operators-hhhnl\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.345321 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.567004 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.690990 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-catalog-content\") pod \"5b6fa08e-8033-472d-9c2c-b5487572f671\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.691100 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-utilities\") pod \"5b6fa08e-8033-472d-9c2c-b5487572f671\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.691133 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-978j8\" (UniqueName: \"kubernetes.io/projected/5b6fa08e-8033-472d-9c2c-b5487572f671-kube-api-access-978j8\") pod \"5b6fa08e-8033-472d-9c2c-b5487572f671\" (UID: \"5b6fa08e-8033-472d-9c2c-b5487572f671\") " Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.692113 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-utilities" (OuterVolumeSpecName: "utilities") pod 
"5b6fa08e-8033-472d-9c2c-b5487572f671" (UID: "5b6fa08e-8033-472d-9c2c-b5487572f671"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.693727 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.696963 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6fa08e-8033-472d-9c2c-b5487572f671-kube-api-access-978j8" (OuterVolumeSpecName: "kube-api-access-978j8") pod "5b6fa08e-8033-472d-9c2c-b5487572f671" (UID: "5b6fa08e-8033-472d-9c2c-b5487572f671"). InnerVolumeSpecName "kube-api-access-978j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.756003 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b6fa08e-8033-472d-9c2c-b5487572f671" (UID: "5b6fa08e-8033-472d-9c2c-b5487572f671"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.795643 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b6fa08e-8033-472d-9c2c-b5487572f671-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.796009 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-978j8\" (UniqueName: \"kubernetes.io/projected/5b6fa08e-8033-472d-9c2c-b5487572f671-kube-api-access-978j8\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:29 crc kubenswrapper[4937]: I0123 08:11:29.915391 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hhhnl"] Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.036165 4937 generic.go:334] "Generic (PLEG): container finished" podID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerID="775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee" exitCode=0 Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.036232 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpp85" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.036249 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpp85" event={"ID":"5b6fa08e-8033-472d-9c2c-b5487572f671","Type":"ContainerDied","Data":"775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee"} Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.036720 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpp85" event={"ID":"5b6fa08e-8033-472d-9c2c-b5487572f671","Type":"ContainerDied","Data":"5861be62bf13ff76791d11523aa0c52af2165fa77ff169e059e981f1190f7a80"} Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.036747 4937 scope.go:117] "RemoveContainer" containerID="775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.041403 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerStarted","Data":"bd328a076926e5fd2d83036ff3ac88132a33e1ad010a86110ffed3deafbb0fb9"} Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.079829 4937 scope.go:117] "RemoveContainer" containerID="fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.085680 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpp85"] Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.098271 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zpp85"] Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.104893 4937 scope.go:117] "RemoveContainer" containerID="da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.187844 4937 
scope.go:117] "RemoveContainer" containerID="775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee" Jan 23 08:11:30 crc kubenswrapper[4937]: E0123 08:11:30.188395 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee\": container with ID starting with 775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee not found: ID does not exist" containerID="775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.188429 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee"} err="failed to get container status \"775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee\": rpc error: code = NotFound desc = could not find container \"775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee\": container with ID starting with 775a46c5cac44222b8dbaadf144c19a2b5882a74ad0da75bd8b4b7dffacbb8ee not found: ID does not exist" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.188454 4937 scope.go:117] "RemoveContainer" containerID="fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a" Jan 23 08:11:30 crc kubenswrapper[4937]: E0123 08:11:30.188788 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a\": container with ID starting with fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a not found: ID does not exist" containerID="fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.188806 4937 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a"} err="failed to get container status \"fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a\": rpc error: code = NotFound desc = could not find container \"fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a\": container with ID starting with fa2db731131bb6467a198ef797152afd9705c1767e62666d115d2f2ee8daf93a not found: ID does not exist" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.188821 4937 scope.go:117] "RemoveContainer" containerID="da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7" Jan 23 08:11:30 crc kubenswrapper[4937]: E0123 08:11:30.189092 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7\": container with ID starting with da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7 not found: ID does not exist" containerID="da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.189110 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7"} err="failed to get container status \"da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7\": rpc error: code = NotFound desc = could not find container \"da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7\": container with ID starting with da2052aca74076bddc414d6c6782d52c4b8ab183c468e20f5144aa9202395ae7 not found: ID does not exist" Jan 23 08:11:30 crc kubenswrapper[4937]: I0123 08:11:30.537467 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" path="/var/lib/kubelet/pods/5b6fa08e-8033-472d-9c2c-b5487572f671/volumes" Jan 23 08:11:31 crc kubenswrapper[4937]: I0123 
08:11:31.053177 4937 generic.go:334] "Generic (PLEG): container finished" podID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerID="ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb" exitCode=0 Jan 23 08:11:31 crc kubenswrapper[4937]: I0123 08:11:31.053233 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerDied","Data":"ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb"} Jan 23 08:11:32 crc kubenswrapper[4937]: I0123 08:11:32.062574 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerStarted","Data":"738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8"} Jan 23 08:11:36 crc kubenswrapper[4937]: I0123 08:11:36.099860 4937 generic.go:334] "Generic (PLEG): container finished" podID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerID="738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8" exitCode=0 Jan 23 08:11:36 crc kubenswrapper[4937]: I0123 08:11:36.099948 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerDied","Data":"738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8"} Jan 23 08:11:37 crc kubenswrapper[4937]: I0123 08:11:37.113129 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerStarted","Data":"14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094"} Jan 23 08:11:37 crc kubenswrapper[4937]: I0123 08:11:37.136825 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hhhnl" podStartSLOduration=3.650398751 podStartE2EDuration="9.136808646s" 
podCreationTimestamp="2026-01-23 08:11:28 +0000 UTC" firstStartedPulling="2026-01-23 08:11:31.057070617 +0000 UTC m=+5890.860837290" lastFinishedPulling="2026-01-23 08:11:36.543480532 +0000 UTC m=+5896.347247185" observedRunningTime="2026-01-23 08:11:37.133900407 +0000 UTC m=+5896.937667060" watchObservedRunningTime="2026-01-23 08:11:37.136808646 +0000 UTC m=+5896.940575299" Jan 23 08:11:39 crc kubenswrapper[4937]: I0123 08:11:39.346999 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:39 crc kubenswrapper[4937]: I0123 08:11:39.347358 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:40 crc kubenswrapper[4937]: I0123 08:11:40.396752 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hhhnl" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="registry-server" probeResult="failure" output=< Jan 23 08:11:40 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s Jan 23 08:11:40 crc kubenswrapper[4937]: > Jan 23 08:11:49 crc kubenswrapper[4937]: I0123 08:11:49.391668 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:49 crc kubenswrapper[4937]: I0123 08:11:49.441681 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:49 crc kubenswrapper[4937]: I0123 08:11:49.634536 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhhnl"] Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.267950 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hhhnl" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="registry-server" 
containerID="cri-o://14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094" gracePeriod=2 Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.748829 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hhhnl" Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.869090 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-catalog-content\") pod \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.869152 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2927\" (UniqueName: \"kubernetes.io/projected/4e1b2910-017b-4771-a6cd-2fae5bbefea4-kube-api-access-q2927\") pod \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.870126 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-utilities\") pod \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\" (UID: \"4e1b2910-017b-4771-a6cd-2fae5bbefea4\") " Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.870899 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-utilities" (OuterVolumeSpecName: "utilities") pod "4e1b2910-017b-4771-a6cd-2fae5bbefea4" (UID: "4e1b2910-017b-4771-a6cd-2fae5bbefea4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.874886 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1b2910-017b-4771-a6cd-2fae5bbefea4-kube-api-access-q2927" (OuterVolumeSpecName: "kube-api-access-q2927") pod "4e1b2910-017b-4771-a6cd-2fae5bbefea4" (UID: "4e1b2910-017b-4771-a6cd-2fae5bbefea4"). InnerVolumeSpecName "kube-api-access-q2927". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.973216 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2927\" (UniqueName: \"kubernetes.io/projected/4e1b2910-017b-4771-a6cd-2fae5bbefea4-kube-api-access-q2927\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.973504 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:51 crc kubenswrapper[4937]: I0123 08:11:51.990222 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e1b2910-017b-4771-a6cd-2fae5bbefea4" (UID: "4e1b2910-017b-4771-a6cd-2fae5bbefea4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.075314 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1b2910-017b-4771-a6cd-2fae5bbefea4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.278691 4937 generic.go:334] "Generic (PLEG): container finished" podID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerID="14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094" exitCode=0 Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.278732 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerDied","Data":"14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094"} Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.278759 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hhhnl" event={"ID":"4e1b2910-017b-4771-a6cd-2fae5bbefea4","Type":"ContainerDied","Data":"bd328a076926e5fd2d83036ff3ac88132a33e1ad010a86110ffed3deafbb0fb9"} Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.278776 4937 scope.go:117] "RemoveContainer" containerID="14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094" Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.278805 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hhhnl"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.312322 4937 scope.go:117] "RemoveContainer" containerID="738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.312538 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hhhnl"]
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.329836 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hhhnl"]
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.343445 4937 scope.go:117] "RemoveContainer" containerID="ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.378920 4937 scope.go:117] "RemoveContainer" containerID="14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094"
Jan 23 08:11:52 crc kubenswrapper[4937]: E0123 08:11:52.379297 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094\": container with ID starting with 14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094 not found: ID does not exist" containerID="14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.379339 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094"} err="failed to get container status \"14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094\": rpc error: code = NotFound desc = could not find container \"14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094\": container with ID starting with 14d230d9b0e801b29a082b038238c0885b0c868e9ff9b021f3b5b6fa938bd094 not found: ID does not exist"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.379367 4937 scope.go:117] "RemoveContainer" containerID="738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8"
Jan 23 08:11:52 crc kubenswrapper[4937]: E0123 08:11:52.379718 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8\": container with ID starting with 738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8 not found: ID does not exist" containerID="738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.379770 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8"} err="failed to get container status \"738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8\": rpc error: code = NotFound desc = could not find container \"738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8\": container with ID starting with 738c8b986abdbbad7323e3f0ff5881f2b0ce14689cc3301588116c5c6c1a7bc8 not found: ID does not exist"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.379803 4937 scope.go:117] "RemoveContainer" containerID="ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb"
Jan 23 08:11:52 crc kubenswrapper[4937]: E0123 08:11:52.380171 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb\": container with ID starting with ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb not found: ID does not exist" containerID="ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.380258 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb"} err="failed to get container status \"ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb\": rpc error: code = NotFound desc = could not find container \"ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb\": container with ID starting with ccb87e7d376e1e9268edaa73f5ab25735a1768e2f98cc6c8c627b7f786bb42fb not found: ID does not exist"
Jan 23 08:11:52 crc kubenswrapper[4937]: I0123 08:11:52.541071 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" path="/var/lib/kubelet/pods/4e1b2910-017b-4771-a6cd-2fae5bbefea4/volumes"
Jan 23 08:13:07 crc kubenswrapper[4937]: I0123 08:13:07.724066 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:13:07 crc kubenswrapper[4937]: I0123 08:13:07.724616 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:13:37 crc kubenswrapper[4937]: I0123 08:13:37.723817 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:13:37 crc kubenswrapper[4937]: I0123 08:13:37.724370 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:14:07 crc kubenswrapper[4937]: I0123 08:14:07.724468 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:14:07 crc kubenswrapper[4937]: I0123 08:14:07.725011 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:14:07 crc kubenswrapper[4937]: I0123 08:14:07.725069 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs"
Jan 23 08:14:07 crc kubenswrapper[4937]: I0123 08:14:07.725886 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 08:14:07 crc kubenswrapper[4937]: I0123 08:14:07.725966 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" gracePeriod=600
Jan 23 08:14:07 crc kubenswrapper[4937]: E0123 08:14:07.852731 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:14:08 crc kubenswrapper[4937]: I0123 08:14:08.503835 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" exitCode=0
Jan 23 08:14:08 crc kubenswrapper[4937]: I0123 08:14:08.503884 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"}
Jan 23 08:14:08 crc kubenswrapper[4937]: I0123 08:14:08.503917 4937 scope.go:117] "RemoveContainer" containerID="30a2a07eaf2a2724f82eb3b0ae80dbd8a7c9690d9e2b1a0374ce255e1044ecea"
Jan 23 08:14:08 crc kubenswrapper[4937]: I0123 08:14:08.504740 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:14:08 crc kubenswrapper[4937]: E0123 08:14:08.505063 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:14:22 crc kubenswrapper[4937]: I0123 08:14:22.527451 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:14:22 crc kubenswrapper[4937]: E0123 08:14:22.528345 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:14:35 crc kubenswrapper[4937]: I0123 08:14:35.526032 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:14:35 crc kubenswrapper[4937]: E0123 08:14:35.526915 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:14:49 crc kubenswrapper[4937]: I0123 08:14:49.526303 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:14:49 crc kubenswrapper[4937]: E0123 08:14:49.528869 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.160772 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"]
Jan 23 08:15:00 crc kubenswrapper[4937]: E0123 08:15:00.162042 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="extract-utilities"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162060 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="extract-utilities"
Jan 23 08:15:00 crc kubenswrapper[4937]: E0123 08:15:00.162076 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="extract-utilities"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162084 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="extract-utilities"
Jan 23 08:15:00 crc kubenswrapper[4937]: E0123 08:15:00.162092 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="registry-server"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162100 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="registry-server"
Jan 23 08:15:00 crc kubenswrapper[4937]: E0123 08:15:00.162113 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="extract-content"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162120 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="extract-content"
Jan 23 08:15:00 crc kubenswrapper[4937]: E0123 08:15:00.162143 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="registry-server"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162152 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="registry-server"
Jan 23 08:15:00 crc kubenswrapper[4937]: E0123 08:15:00.162176 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="extract-content"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162183 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="extract-content"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162460 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1b2910-017b-4771-a6cd-2fae5bbefea4" containerName="registry-server"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.162485 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6fa08e-8033-472d-9c2c-b5487572f671" containerName="registry-server"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.163330 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.167212 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.167287 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.169210 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"]
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.306897 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-config-volume\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.307015 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trff8\" (UniqueName: \"kubernetes.io/projected/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-kube-api-access-trff8\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.307161 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-secret-volume\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.409726 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-config-volume\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.409800 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trff8\" (UniqueName: \"kubernetes.io/projected/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-kube-api-access-trff8\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.409898 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-secret-volume\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.410840 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-config-volume\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.426797 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-secret-volume\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.430703 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trff8\" (UniqueName: \"kubernetes.io/projected/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-kube-api-access-trff8\") pod \"collect-profiles-29485935-h954l\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.525919 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.973279 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"]
Jan 23 08:15:00 crc kubenswrapper[4937]: I0123 08:15:00.999887 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l" event={"ID":"8a3eddca-934d-4c68-a69d-1b6100d1ba5e","Type":"ContainerStarted","Data":"0cf615d4c905de391655da74f692fb015a5503aca8f56c4fdbc58f731d47cbd4"}
Jan 23 08:15:02 crc kubenswrapper[4937]: I0123 08:15:02.010617 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l" event={"ID":"8a3eddca-934d-4c68-a69d-1b6100d1ba5e","Type":"ContainerStarted","Data":"b1a8a6a1780eabf1099382923624e6915da0f198f8d827f1ad350840cdae1dc9"}
Jan 23 08:15:02 crc kubenswrapper[4937]: I0123 08:15:02.544291 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:15:02 crc kubenswrapper[4937]: E0123 08:15:02.544743 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:15:03 crc kubenswrapper[4937]: I0123 08:15:03.033835 4937 generic.go:334] "Generic (PLEG): container finished" podID="8a3eddca-934d-4c68-a69d-1b6100d1ba5e" containerID="b1a8a6a1780eabf1099382923624e6915da0f198f8d827f1ad350840cdae1dc9" exitCode=0
Jan 23 08:15:03 crc kubenswrapper[4937]: I0123 08:15:03.033899 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l" event={"ID":"8a3eddca-934d-4c68-a69d-1b6100d1ba5e","Type":"ContainerDied","Data":"b1a8a6a1780eabf1099382923624e6915da0f198f8d827f1ad350840cdae1dc9"}
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.448620 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.598803 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trff8\" (UniqueName: \"kubernetes.io/projected/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-kube-api-access-trff8\") pod \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") "
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.598931 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-secret-volume\") pod \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") "
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.599221 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-config-volume\") pod \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\" (UID: \"8a3eddca-934d-4c68-a69d-1b6100d1ba5e\") "
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.600118 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a3eddca-934d-4c68-a69d-1b6100d1ba5e" (UID: "8a3eddca-934d-4c68-a69d-1b6100d1ba5e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.611944 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-kube-api-access-trff8" (OuterVolumeSpecName: "kube-api-access-trff8") pod "8a3eddca-934d-4c68-a69d-1b6100d1ba5e" (UID: "8a3eddca-934d-4c68-a69d-1b6100d1ba5e"). InnerVolumeSpecName "kube-api-access-trff8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.612423 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a3eddca-934d-4c68-a69d-1b6100d1ba5e" (UID: "8a3eddca-934d-4c68-a69d-1b6100d1ba5e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.702234 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.702561 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trff8\" (UniqueName: \"kubernetes.io/projected/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-kube-api-access-trff8\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:04 crc kubenswrapper[4937]: I0123 08:15:04.702575 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a3eddca-934d-4c68-a69d-1b6100d1ba5e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:05 crc kubenswrapper[4937]: I0123 08:15:05.055139 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l" event={"ID":"8a3eddca-934d-4c68-a69d-1b6100d1ba5e","Type":"ContainerDied","Data":"0cf615d4c905de391655da74f692fb015a5503aca8f56c4fdbc58f731d47cbd4"}
Jan 23 08:15:05 crc kubenswrapper[4937]: I0123 08:15:05.055183 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cf615d4c905de391655da74f692fb015a5503aca8f56c4fdbc58f731d47cbd4"
Jan 23 08:15:05 crc kubenswrapper[4937]: I0123 08:15:05.055316 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-h954l"
Jan 23 08:15:05 crc kubenswrapper[4937]: I0123 08:15:05.529502 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"]
Jan 23 08:15:05 crc kubenswrapper[4937]: I0123 08:15:05.538283 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485890-mgdct"]
Jan 23 08:15:06 crc kubenswrapper[4937]: I0123 08:15:06.542226 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="431a4044-87df-4912-ba91-8834cbba5091" path="/var/lib/kubelet/pods/431a4044-87df-4912-ba91-8834cbba5091/volumes"
Jan 23 08:15:10 crc kubenswrapper[4937]: I0123 08:15:10.084452 4937 scope.go:117] "RemoveContainer" containerID="a40aa4bc4ba29678bcd240cbd82538dcee11dac2ba9699124ba892513f6164f0"
Jan 23 08:15:15 crc kubenswrapper[4937]: I0123 08:15:15.529351 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:15:15 crc kubenswrapper[4937]: E0123 08:15:15.530264 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:15:28 crc kubenswrapper[4937]: I0123 08:15:28.527205 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:15:28 crc kubenswrapper[4937]: E0123 08:15:28.529244 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:15:41 crc kubenswrapper[4937]: I0123 08:15:41.526475 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:15:41 crc kubenswrapper[4937]: E0123 08:15:41.527280 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:15:55 crc kubenswrapper[4937]: I0123 08:15:55.526195 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:15:55 crc kubenswrapper[4937]: E0123 08:15:55.528079 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:16:07 crc kubenswrapper[4937]: I0123 08:16:07.526811 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:16:07 crc kubenswrapper[4937]: E0123 08:16:07.529841 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:16:22 crc kubenswrapper[4937]: I0123 08:16:22.525947 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:16:22 crc kubenswrapper[4937]: E0123 08:16:22.527023 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:16:35 crc kubenswrapper[4937]: I0123 08:16:35.527083 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:16:35 crc kubenswrapper[4937]: E0123 08:16:35.529379 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:16:47 crc kubenswrapper[4937]: I0123 08:16:47.526886 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:16:47 crc kubenswrapper[4937]: E0123 08:16:47.527849 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:16:58 crc kubenswrapper[4937]: I0123 08:16:58.527005 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:16:58 crc kubenswrapper[4937]: E0123 08:16:58.527757 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:17:08 crc kubenswrapper[4937]: I0123 08:17:08.870726 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f5nlh"]
Jan 23 08:17:08 crc kubenswrapper[4937]: E0123 08:17:08.871802 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3eddca-934d-4c68-a69d-1b6100d1ba5e" containerName="collect-profiles"
Jan 23 08:17:08 crc kubenswrapper[4937]: I0123 08:17:08.871819 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3eddca-934d-4c68-a69d-1b6100d1ba5e" containerName="collect-profiles"
Jan 23 08:17:08 crc kubenswrapper[4937]: I0123 08:17:08.872028 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3eddca-934d-4c68-a69d-1b6100d1ba5e" containerName="collect-profiles"
Jan 23 08:17:08 crc kubenswrapper[4937]: I0123 08:17:08.874463 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:08 crc kubenswrapper[4937]: I0123 08:17:08.895442 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5nlh"]
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.008279 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdxj\" (UniqueName: \"kubernetes.io/projected/adffe427-25cd-485c-b35d-7bf52a52959b-kube-api-access-6bdxj\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.008538 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-utilities\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.008570 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-catalog-content\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.110056 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-utilities\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.110113 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-catalog-content\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.110186 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bdxj\" (UniqueName: \"kubernetes.io/projected/adffe427-25cd-485c-b35d-7bf52a52959b-kube-api-access-6bdxj\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.110555 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-utilities\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.110715 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-catalog-content\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.132713 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bdxj\" (UniqueName: \"kubernetes.io/projected/adffe427-25cd-485c-b35d-7bf52a52959b-kube-api-access-6bdxj\") pod \"redhat-marketplace-f5nlh\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.195788 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5nlh"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.526509 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee"
Jan 23 08:17:09 crc kubenswrapper[4937]: E0123 08:17:09.527185 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:17:09 crc kubenswrapper[4937]: I0123 08:17:09.668429 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5nlh"]
Jan 23 08:17:10 crc kubenswrapper[4937]: I0123 08:17:10.324636 4937 generic.go:334] "Generic (PLEG): container finished" podID="adffe427-25cd-485c-b35d-7bf52a52959b" containerID="2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3" exitCode=0
Jan 23 08:17:10 crc kubenswrapper[4937]: I0123 08:17:10.324820 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5nlh" event={"ID":"adffe427-25cd-485c-b35d-7bf52a52959b","Type":"ContainerDied","Data":"2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3"}
Jan 23 08:17:10 crc kubenswrapper[4937]: I0123 08:17:10.324941 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5nlh" event={"ID":"adffe427-25cd-485c-b35d-7bf52a52959b","Type":"ContainerStarted","Data":"f59a11cd6df9053c9c2ad8c1db6e0bc90902a09bd0cb106a87b8844071c98e6f"}
Jan 23 08:17:10 crc kubenswrapper[4937]: I0123 08:17:10.327488 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23
08:17:12 crc kubenswrapper[4937]: I0123 08:17:12.349867 4937 generic.go:334] "Generic (PLEG): container finished" podID="adffe427-25cd-485c-b35d-7bf52a52959b" containerID="ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77" exitCode=0 Jan 23 08:17:12 crc kubenswrapper[4937]: I0123 08:17:12.349954 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5nlh" event={"ID":"adffe427-25cd-485c-b35d-7bf52a52959b","Type":"ContainerDied","Data":"ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77"} Jan 23 08:17:13 crc kubenswrapper[4937]: I0123 08:17:13.363357 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5nlh" event={"ID":"adffe427-25cd-485c-b35d-7bf52a52959b","Type":"ContainerStarted","Data":"27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20"} Jan 23 08:17:13 crc kubenswrapper[4937]: I0123 08:17:13.386850 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f5nlh" podStartSLOduration=2.663651176 podStartE2EDuration="5.38683005s" podCreationTimestamp="2026-01-23 08:17:08 +0000 UTC" firstStartedPulling="2026-01-23 08:17:10.327253386 +0000 UTC m=+6230.131020039" lastFinishedPulling="2026-01-23 08:17:13.05043226 +0000 UTC m=+6232.854198913" observedRunningTime="2026-01-23 08:17:13.380291232 +0000 UTC m=+6233.184057905" watchObservedRunningTime="2026-01-23 08:17:13.38683005 +0000 UTC m=+6233.190596693" Jan 23 08:17:19 crc kubenswrapper[4937]: I0123 08:17:19.196454 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f5nlh" Jan 23 08:17:19 crc kubenswrapper[4937]: I0123 08:17:19.197045 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f5nlh" Jan 23 08:17:19 crc kubenswrapper[4937]: I0123 08:17:19.244523 4937 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f5nlh" Jan 23 08:17:19 crc kubenswrapper[4937]: I0123 08:17:19.515323 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f5nlh" Jan 23 08:17:19 crc kubenswrapper[4937]: I0123 08:17:19.593631 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5nlh"] Jan 23 08:17:20 crc kubenswrapper[4937]: I0123 08:17:20.534252 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:17:20 crc kubenswrapper[4937]: E0123 08:17:20.534831 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:17:21 crc kubenswrapper[4937]: I0123 08:17:21.434742 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f5nlh" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="registry-server" containerID="cri-o://27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20" gracePeriod=2 Jan 23 08:17:21 crc kubenswrapper[4937]: I0123 08:17:21.932634 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5nlh" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.038941 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bdxj\" (UniqueName: \"kubernetes.io/projected/adffe427-25cd-485c-b35d-7bf52a52959b-kube-api-access-6bdxj\") pod \"adffe427-25cd-485c-b35d-7bf52a52959b\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.039086 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-catalog-content\") pod \"adffe427-25cd-485c-b35d-7bf52a52959b\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.039113 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-utilities\") pod \"adffe427-25cd-485c-b35d-7bf52a52959b\" (UID: \"adffe427-25cd-485c-b35d-7bf52a52959b\") " Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.040144 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-utilities" (OuterVolumeSpecName: "utilities") pod "adffe427-25cd-485c-b35d-7bf52a52959b" (UID: "adffe427-25cd-485c-b35d-7bf52a52959b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.045611 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adffe427-25cd-485c-b35d-7bf52a52959b-kube-api-access-6bdxj" (OuterVolumeSpecName: "kube-api-access-6bdxj") pod "adffe427-25cd-485c-b35d-7bf52a52959b" (UID: "adffe427-25cd-485c-b35d-7bf52a52959b"). InnerVolumeSpecName "kube-api-access-6bdxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.063729 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adffe427-25cd-485c-b35d-7bf52a52959b" (UID: "adffe427-25cd-485c-b35d-7bf52a52959b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.141340 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bdxj\" (UniqueName: \"kubernetes.io/projected/adffe427-25cd-485c-b35d-7bf52a52959b-kube-api-access-6bdxj\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.141722 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.141812 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adffe427-25cd-485c-b35d-7bf52a52959b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.450935 4937 generic.go:334] "Generic (PLEG): container finished" podID="adffe427-25cd-485c-b35d-7bf52a52959b" containerID="27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20" exitCode=0 Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.450982 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5nlh" event={"ID":"adffe427-25cd-485c-b35d-7bf52a52959b","Type":"ContainerDied","Data":"27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20"} Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.450997 4937 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f5nlh" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.451006 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f5nlh" event={"ID":"adffe427-25cd-485c-b35d-7bf52a52959b","Type":"ContainerDied","Data":"f59a11cd6df9053c9c2ad8c1db6e0bc90902a09bd0cb106a87b8844071c98e6f"} Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.451025 4937 scope.go:117] "RemoveContainer" containerID="27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.472540 4937 scope.go:117] "RemoveContainer" containerID="ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.495003 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5nlh"] Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.504524 4937 scope.go:117] "RemoveContainer" containerID="2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.505502 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f5nlh"] Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.540897 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" path="/var/lib/kubelet/pods/adffe427-25cd-485c-b35d-7bf52a52959b/volumes" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.549753 4937 scope.go:117] "RemoveContainer" containerID="27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20" Jan 23 08:17:22 crc kubenswrapper[4937]: E0123 08:17:22.551443 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20\": container with ID 
starting with 27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20 not found: ID does not exist" containerID="27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.551502 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20"} err="failed to get container status \"27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20\": rpc error: code = NotFound desc = could not find container \"27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20\": container with ID starting with 27eafaa557b3bea26021290631c598a2db4eed000729020a77e060a3c4628d20 not found: ID does not exist" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.551535 4937 scope.go:117] "RemoveContainer" containerID="ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77" Jan 23 08:17:22 crc kubenswrapper[4937]: E0123 08:17:22.553077 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77\": container with ID starting with ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77 not found: ID does not exist" containerID="ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.553113 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77"} err="failed to get container status \"ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77\": rpc error: code = NotFound desc = could not find container \"ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77\": container with ID starting with ef99ea0f82c936cd02ea3620486066614814367c88519f1606cb3a254a673c77 not found: 
ID does not exist" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.553133 4937 scope.go:117] "RemoveContainer" containerID="2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3" Jan 23 08:17:22 crc kubenswrapper[4937]: E0123 08:17:22.553453 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3\": container with ID starting with 2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3 not found: ID does not exist" containerID="2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3" Jan 23 08:17:22 crc kubenswrapper[4937]: I0123 08:17:22.553482 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3"} err="failed to get container status \"2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3\": rpc error: code = NotFound desc = could not find container \"2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3\": container with ID starting with 2e5b6bd1e82a5f595259973a61eb1f3d65329933cb0debbd3ce9c245afa0d2b3 not found: ID does not exist" Jan 23 08:17:35 crc kubenswrapper[4937]: I0123 08:17:35.527070 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:17:35 crc kubenswrapper[4937]: E0123 08:17:35.527808 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:17:49 crc kubenswrapper[4937]: I0123 08:17:49.526708 4937 
scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:17:49 crc kubenswrapper[4937]: E0123 08:17:49.527547 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:18:00 crc kubenswrapper[4937]: I0123 08:18:00.538857 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:18:00 crc kubenswrapper[4937]: E0123 08:18:00.539627 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:18:14 crc kubenswrapper[4937]: I0123 08:18:14.527176 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:18:14 crc kubenswrapper[4937]: E0123 08:18:14.527979 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:18:29 crc kubenswrapper[4937]: I0123 
08:18:29.527782 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:18:29 crc kubenswrapper[4937]: E0123 08:18:29.528610 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:18:41 crc kubenswrapper[4937]: I0123 08:18:41.526627 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:18:41 crc kubenswrapper[4937]: E0123 08:18:41.527581 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:18:56 crc kubenswrapper[4937]: I0123 08:18:56.527132 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:18:56 crc kubenswrapper[4937]: E0123 08:18:56.527975 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:19:08 crc 
kubenswrapper[4937]: I0123 08:19:08.526495 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:19:09 crc kubenswrapper[4937]: I0123 08:19:09.566330 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"66783b8294a23c431916f9da9652fe4cd266dcea80d119e56b082c7f71b7ed3e"} Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.371756 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5dgb"] Jan 23 08:20:04 crc kubenswrapper[4937]: E0123 08:20:04.376234 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="registry-server" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.376304 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="registry-server" Jan 23 08:20:04 crc kubenswrapper[4937]: E0123 08:20:04.376346 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="extract-content" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.376388 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="extract-content" Jan 23 08:20:04 crc kubenswrapper[4937]: E0123 08:20:04.376404 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="extract-utilities" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.376413 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="extract-utilities" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.377188 4937 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="adffe427-25cd-485c-b35d-7bf52a52959b" containerName="registry-server" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.379137 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.386159 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5dgb"] Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.536278 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-catalog-content\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.536565 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-utilities\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.536697 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sksgs\" (UniqueName: \"kubernetes.io/projected/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-kube-api-access-sksgs\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.638397 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-catalog-content\") pod \"community-operators-h5dgb\" (UID: 
\"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.638709 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-utilities\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.638752 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sksgs\" (UniqueName: \"kubernetes.io/projected/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-kube-api-access-sksgs\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.639389 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-catalog-content\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.639936 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-utilities\") pod \"community-operators-h5dgb\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.663522 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sksgs\" (UniqueName: \"kubernetes.io/projected/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-kube-api-access-sksgs\") pod \"community-operators-h5dgb\" (UID: 
\"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") " pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:04 crc kubenswrapper[4937]: I0123 08:20:04.699118 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgb" Jan 23 08:20:05 crc kubenswrapper[4937]: I0123 08:20:05.219486 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5dgb"] Jan 23 08:20:06 crc kubenswrapper[4937]: I0123 08:20:06.059876 4937 generic.go:334] "Generic (PLEG): container finished" podID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerID="ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628" exitCode=0 Jan 23 08:20:06 crc kubenswrapper[4937]: I0123 08:20:06.060308 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerDied","Data":"ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628"} Jan 23 08:20:06 crc kubenswrapper[4937]: I0123 08:20:06.061616 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerStarted","Data":"5eacb53d122f1b359044024b4033aa6a26c1eb9291bb6bd53da3bf48ddf3d859"} Jan 23 08:20:07 crc kubenswrapper[4937]: I0123 08:20:07.072821 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerStarted","Data":"d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419"} Jan 23 08:20:08 crc kubenswrapper[4937]: I0123 08:20:08.085207 4937 generic.go:334] "Generic (PLEG): container finished" podID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerID="d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419" exitCode=0 Jan 23 08:20:08 crc kubenswrapper[4937]: I0123 
08:20:08.085243 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerDied","Data":"d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419"}
Jan 23 08:20:09 crc kubenswrapper[4937]: I0123 08:20:09.097555 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerStarted","Data":"ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651"}
Jan 23 08:20:09 crc kubenswrapper[4937]: I0123 08:20:09.123006 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5dgb" podStartSLOduration=2.522856875 podStartE2EDuration="5.122983694s" podCreationTimestamp="2026-01-23 08:20:04 +0000 UTC" firstStartedPulling="2026-01-23 08:20:06.063531983 +0000 UTC m=+6405.867298636" lastFinishedPulling="2026-01-23 08:20:08.663658802 +0000 UTC m=+6408.467425455" observedRunningTime="2026-01-23 08:20:09.118360869 +0000 UTC m=+6408.922127522" watchObservedRunningTime="2026-01-23 08:20:09.122983694 +0000 UTC m=+6408.926750347"
Jan 23 08:20:14 crc kubenswrapper[4937]: I0123 08:20:14.700090 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5dgb"
Jan 23 08:20:14 crc kubenswrapper[4937]: I0123 08:20:14.700557 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5dgb"
Jan 23 08:20:14 crc kubenswrapper[4937]: I0123 08:20:14.748681 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5dgb"
Jan 23 08:20:15 crc kubenswrapper[4937]: I0123 08:20:15.214959 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5dgb"
Jan 23 08:20:15 crc kubenswrapper[4937]: I0123 08:20:15.285760 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5dgb"]
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.179647 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5dgb" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="registry-server" containerID="cri-o://ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651" gracePeriod=2
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.778153 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgb"
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.871782 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-catalog-content\") pod \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") "
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.871877 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sksgs\" (UniqueName: \"kubernetes.io/projected/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-kube-api-access-sksgs\") pod \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") "
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.872019 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-utilities\") pod \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\" (UID: \"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031\") "
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.872924 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-utilities" (OuterVolumeSpecName: "utilities") pod "8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" (UID: "8b01c18b-d2c4-4e1d-8d35-50ea8eb39031"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.882052 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-kube-api-access-sksgs" (OuterVolumeSpecName: "kube-api-access-sksgs") pod "8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" (UID: "8b01c18b-d2c4-4e1d-8d35-50ea8eb39031"). InnerVolumeSpecName "kube-api-access-sksgs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.924852 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" (UID: "8b01c18b-d2c4-4e1d-8d35-50ea8eb39031"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.975419 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.975465 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:17 crc kubenswrapper[4937]: I0123 08:20:17.975481 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sksgs\" (UniqueName: \"kubernetes.io/projected/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031-kube-api-access-sksgs\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.192070 4937 generic.go:334] "Generic (PLEG): container finished" podID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerID="ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651" exitCode=0
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.192138 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerDied","Data":"ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651"}
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.192220 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5dgb" event={"ID":"8b01c18b-d2c4-4e1d-8d35-50ea8eb39031","Type":"ContainerDied","Data":"5eacb53d122f1b359044024b4033aa6a26c1eb9291bb6bd53da3bf48ddf3d859"}
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.192245 4937 scope.go:117] "RemoveContainer" containerID="ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.192163 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5dgb"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.217366 4937 scope.go:117] "RemoveContainer" containerID="d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.247410 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5dgb"]
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.254903 4937 scope.go:117] "RemoveContainer" containerID="ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.258409 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5dgb"]
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.286651 4937 scope.go:117] "RemoveContainer" containerID="ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651"
Jan 23 08:20:18 crc kubenswrapper[4937]: E0123 08:20:18.287087 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651\": container with ID starting with ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651 not found: ID does not exist" containerID="ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.287125 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651"} err="failed to get container status \"ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651\": rpc error: code = NotFound desc = could not find container \"ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651\": container with ID starting with ca66bb246d9cdbb2c43a03d75c2f9a42ad566e302ee3b1354cb782f8ed1c4651 not found: ID does not exist"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.287151 4937 scope.go:117] "RemoveContainer" containerID="d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419"
Jan 23 08:20:18 crc kubenswrapper[4937]: E0123 08:20:18.287497 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419\": container with ID starting with d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419 not found: ID does not exist" containerID="d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.287520 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419"} err="failed to get container status \"d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419\": rpc error: code = NotFound desc = could not find container \"d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419\": container with ID starting with d7a7d317c6055d27d14522396b11c197ed84d87a8e7539cc27c868be2fc68419 not found: ID does not exist"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.287536 4937 scope.go:117] "RemoveContainer" containerID="ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628"
Jan 23 08:20:18 crc kubenswrapper[4937]: E0123 08:20:18.287877 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628\": container with ID starting with ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628 not found: ID does not exist" containerID="ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.287899 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628"} err="failed to get container status \"ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628\": rpc error: code = NotFound desc = could not find container \"ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628\": container with ID starting with ef234eb896fc812344e25e2d774ab30151f2289cc1aa810626282d47c9c77628 not found: ID does not exist"
Jan 23 08:20:18 crc kubenswrapper[4937]: I0123 08:20:18.543326 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" path="/var/lib/kubelet/pods/8b01c18b-d2c4-4e1d-8d35-50ea8eb39031/volumes"
Jan 23 08:21:37 crc kubenswrapper[4937]: I0123 08:21:37.723946 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:21:37 crc kubenswrapper[4937]: I0123 08:21:37.724524 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.914761 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gl88x"]
Jan 23 08:21:45 crc kubenswrapper[4937]: E0123 08:21:45.915790 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="extract-utilities"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.915808 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="extract-utilities"
Jan 23 08:21:45 crc kubenswrapper[4937]: E0123 08:21:45.915852 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="registry-server"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.915861 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="registry-server"
Jan 23 08:21:45 crc kubenswrapper[4937]: E0123 08:21:45.915886 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="extract-content"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.915913 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="extract-content"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.916180 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b01c18b-d2c4-4e1d-8d35-50ea8eb39031" containerName="registry-server"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.918260 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:45 crc kubenswrapper[4937]: I0123 08:21:45.949258 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl88x"]
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.057045 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnq4\" (UniqueName: \"kubernetes.io/projected/5fe53d62-c089-4443-9064-aceef9768c8a-kube-api-access-8mnq4\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.057159 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-catalog-content\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.057306 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-utilities\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.159280 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mnq4\" (UniqueName: \"kubernetes.io/projected/5fe53d62-c089-4443-9064-aceef9768c8a-kube-api-access-8mnq4\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.159414 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-catalog-content\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.159453 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-utilities\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.160013 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-catalog-content\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.160098 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-utilities\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.182709 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mnq4\" (UniqueName: \"kubernetes.io/projected/5fe53d62-c089-4443-9064-aceef9768c8a-kube-api-access-8mnq4\") pod \"redhat-operators-gl88x\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") " pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.246318 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.737754 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gl88x"]
Jan 23 08:21:46 crc kubenswrapper[4937]: I0123 08:21:46.983860 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerStarted","Data":"844a4112936f4420e56a87131bdf0cf6448a97410176e83542cb8507a68361ef"}
Jan 23 08:21:47 crc kubenswrapper[4937]: I0123 08:21:47.995472 4937 generic.go:334] "Generic (PLEG): container finished" podID="5fe53d62-c089-4443-9064-aceef9768c8a" containerID="c07315b43ff251a5aa76eebe7f6f6d8f92f164c1bfbaade1dc64e406ef2dc471" exitCode=0
Jan 23 08:21:47 crc kubenswrapper[4937]: I0123 08:21:47.995525 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerDied","Data":"c07315b43ff251a5aa76eebe7f6f6d8f92f164c1bfbaade1dc64e406ef2dc471"}
Jan 23 08:21:50 crc kubenswrapper[4937]: I0123 08:21:50.029877 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerStarted","Data":"1cbab134667ce1b97d5dd734a4e6bb787c10a1502adf94feebba01fa8e4b82c7"}
Jan 23 08:21:53 crc kubenswrapper[4937]: I0123 08:21:53.054145 4937 generic.go:334] "Generic (PLEG): container finished" podID="5fe53d62-c089-4443-9064-aceef9768c8a" containerID="1cbab134667ce1b97d5dd734a4e6bb787c10a1502adf94feebba01fa8e4b82c7" exitCode=0
Jan 23 08:21:53 crc kubenswrapper[4937]: I0123 08:21:53.054215 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerDied","Data":"1cbab134667ce1b97d5dd734a4e6bb787c10a1502adf94feebba01fa8e4b82c7"}
Jan 23 08:21:55 crc kubenswrapper[4937]: I0123 08:21:55.074692 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerStarted","Data":"693e38559da192e5f07ff4ce30db52bd5731326a394efb8270f86b1b94bbe9ff"}
Jan 23 08:21:55 crc kubenswrapper[4937]: I0123 08:21:55.103159 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gl88x" podStartSLOduration=3.62288765 podStartE2EDuration="10.103140029s" podCreationTimestamp="2026-01-23 08:21:45 +0000 UTC" firstStartedPulling="2026-01-23 08:21:47.999009777 +0000 UTC m=+6507.802776440" lastFinishedPulling="2026-01-23 08:21:54.479262166 +0000 UTC m=+6514.283028819" observedRunningTime="2026-01-23 08:21:55.094988637 +0000 UTC m=+6514.898755310" watchObservedRunningTime="2026-01-23 08:21:55.103140029 +0000 UTC m=+6514.906906682"
Jan 23 08:21:56 crc kubenswrapper[4937]: I0123 08:21:56.247364 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:56 crc kubenswrapper[4937]: I0123 08:21:56.248677 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:21:57 crc kubenswrapper[4937]: I0123 08:21:57.295248 4937 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gl88x" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="registry-server" probeResult="failure" output=<
Jan 23 08:21:57 crc kubenswrapper[4937]: timeout: failed to connect service ":50051" within 1s
Jan 23 08:21:57 crc kubenswrapper[4937]: >
Jan 23 08:22:04 crc kubenswrapper[4937]: I0123 08:22:04.979522 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l68qm"]
Jan 23 08:22:04 crc kubenswrapper[4937]: I0123 08:22:04.983182 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:04 crc kubenswrapper[4937]: I0123 08:22:04.985166 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-utilities\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:04 crc kubenswrapper[4937]: I0123 08:22:04.985265 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-catalog-content\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:04 crc kubenswrapper[4937]: I0123 08:22:04.985313 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr96b\" (UniqueName: \"kubernetes.io/projected/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-kube-api-access-dr96b\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:04 crc kubenswrapper[4937]: I0123 08:22:04.990386 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l68qm"]
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.087076 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr96b\" (UniqueName: \"kubernetes.io/projected/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-kube-api-access-dr96b\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.087447 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-utilities\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.087568 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-catalog-content\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.087948 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-utilities\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.087995 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-catalog-content\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.110482 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr96b\" (UniqueName: \"kubernetes.io/projected/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-kube-api-access-dr96b\") pod \"certified-operators-l68qm\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") " pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.316178 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:05 crc kubenswrapper[4937]: I0123 08:22:05.872789 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l68qm"]
Jan 23 08:22:06 crc kubenswrapper[4937]: I0123 08:22:06.171651 4937 generic.go:334] "Generic (PLEG): container finished" podID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerID="5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b" exitCode=0
Jan 23 08:22:06 crc kubenswrapper[4937]: I0123 08:22:06.171774 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerDied","Data":"5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b"}
Jan 23 08:22:06 crc kubenswrapper[4937]: I0123 08:22:06.172913 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerStarted","Data":"60771c2b2e3f284541c8a17fcf4d49d2214072283f06cac722793bffeff70154"}
Jan 23 08:22:06 crc kubenswrapper[4937]: I0123 08:22:06.294144 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:22:06 crc kubenswrapper[4937]: I0123 08:22:06.343615 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:22:07 crc kubenswrapper[4937]: I0123 08:22:07.184811 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerStarted","Data":"8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94"}
Jan 23 08:22:07 crc kubenswrapper[4937]: I0123 08:22:07.724958 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:22:07 crc kubenswrapper[4937]: I0123 08:22:07.725582 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:22:08 crc kubenswrapper[4937]: I0123 08:22:08.194845 4937 generic.go:334] "Generic (PLEG): container finished" podID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerID="8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94" exitCode=0
Jan 23 08:22:08 crc kubenswrapper[4937]: I0123 08:22:08.194906 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerDied","Data":"8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94"}
Jan 23 08:22:08 crc kubenswrapper[4937]: I0123 08:22:08.553293 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl88x"]
Jan 23 08:22:08 crc kubenswrapper[4937]: I0123 08:22:08.553847 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gl88x" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="registry-server" containerID="cri-o://693e38559da192e5f07ff4ce30db52bd5731326a394efb8270f86b1b94bbe9ff" gracePeriod=2
Jan 23 08:22:10 crc kubenswrapper[4937]: I0123 08:22:10.232711 4937 generic.go:334] "Generic (PLEG): container finished" podID="5fe53d62-c089-4443-9064-aceef9768c8a" containerID="693e38559da192e5f07ff4ce30db52bd5731326a394efb8270f86b1b94bbe9ff" exitCode=0
Jan 23 08:22:10 crc kubenswrapper[4937]: I0123 08:22:10.232895 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerDied","Data":"693e38559da192e5f07ff4ce30db52bd5731326a394efb8270f86b1b94bbe9ff"}
Jan 23 08:22:10 crc kubenswrapper[4937]: I0123 08:22:10.892436 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.013390 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-utilities\") pod \"5fe53d62-c089-4443-9064-aceef9768c8a\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") "
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.013580 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-catalog-content\") pod \"5fe53d62-c089-4443-9064-aceef9768c8a\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") "
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.013704 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mnq4\" (UniqueName: \"kubernetes.io/projected/5fe53d62-c089-4443-9064-aceef9768c8a-kube-api-access-8mnq4\") pod \"5fe53d62-c089-4443-9064-aceef9768c8a\" (UID: \"5fe53d62-c089-4443-9064-aceef9768c8a\") "
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.015092 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-utilities" (OuterVolumeSpecName: "utilities") pod "5fe53d62-c089-4443-9064-aceef9768c8a" (UID: "5fe53d62-c089-4443-9064-aceef9768c8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.019931 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe53d62-c089-4443-9064-aceef9768c8a-kube-api-access-8mnq4" (OuterVolumeSpecName: "kube-api-access-8mnq4") pod "5fe53d62-c089-4443-9064-aceef9768c8a" (UID: "5fe53d62-c089-4443-9064-aceef9768c8a"). InnerVolumeSpecName "kube-api-access-8mnq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.117848 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.117964 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mnq4\" (UniqueName: \"kubernetes.io/projected/5fe53d62-c089-4443-9064-aceef9768c8a-kube-api-access-8mnq4\") on node \"crc\" DevicePath \"\""
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.143104 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fe53d62-c089-4443-9064-aceef9768c8a" (UID: "5fe53d62-c089-4443-9064-aceef9768c8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.220078 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe53d62-c089-4443-9064-aceef9768c8a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.244158 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gl88x" event={"ID":"5fe53d62-c089-4443-9064-aceef9768c8a","Type":"ContainerDied","Data":"844a4112936f4420e56a87131bdf0cf6448a97410176e83542cb8507a68361ef"}
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.244223 4937 scope.go:117] "RemoveContainer" containerID="693e38559da192e5f07ff4ce30db52bd5731326a394efb8270f86b1b94bbe9ff"
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.244405 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gl88x"
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.248961 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerStarted","Data":"58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c"}
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.264159 4937 scope.go:117] "RemoveContainer" containerID="1cbab134667ce1b97d5dd734a4e6bb787c10a1502adf94feebba01fa8e4b82c7"
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.274400 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l68qm" podStartSLOduration=3.438574058 podStartE2EDuration="7.274381785s" podCreationTimestamp="2026-01-23 08:22:04 +0000 UTC" firstStartedPulling="2026-01-23 08:22:06.173699098 +0000 UTC m=+6525.977465741" lastFinishedPulling="2026-01-23 08:22:10.009506815 +0000 UTC m=+6529.813273468" observedRunningTime="2026-01-23 08:22:11.265155874 +0000 UTC m=+6531.068922527" watchObservedRunningTime="2026-01-23 08:22:11.274381785 +0000 UTC m=+6531.078148438"
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.302639 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gl88x"]
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.305919 4937 scope.go:117] "RemoveContainer" containerID="c07315b43ff251a5aa76eebe7f6f6d8f92f164c1bfbaade1dc64e406ef2dc471"
Jan 23 08:22:11 crc kubenswrapper[4937]: I0123 08:22:11.312449 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gl88x"]
Jan 23 08:22:12 crc kubenswrapper[4937]: I0123 08:22:12.556741 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" path="/var/lib/kubelet/pods/5fe53d62-c089-4443-9064-aceef9768c8a/volumes"
Jan 23 08:22:15 crc kubenswrapper[4937]: I0123 08:22:15.316285 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:15 crc kubenswrapper[4937]: I0123 08:22:15.316624 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:15 crc kubenswrapper[4937]: I0123 08:22:15.369955 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:16 crc kubenswrapper[4937]: I0123 08:22:16.334705 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:17 crc kubenswrapper[4937]: I0123 08:22:17.756036 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l68qm"]
Jan 23 08:22:18 crc kubenswrapper[4937]: I0123 08:22:18.482455 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l68qm" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="registry-server" containerID="cri-o://58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c" gracePeriod=2
Jan 23 08:22:18 crc kubenswrapper[4937]: I0123 08:22:18.929400 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l68qm"
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.086420 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr96b\" (UniqueName: \"kubernetes.io/projected/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-kube-api-access-dr96b\") pod \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") "
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.086713 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-utilities\") pod \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") "
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.086860 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-catalog-content\") pod \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\" (UID: \"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c\") "
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.087691 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-utilities" (OuterVolumeSpecName: "utilities") pod "9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" (UID: "9cbce9fd-ce3a-4fca-bfea-5a837d39be7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.092485 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-kube-api-access-dr96b" (OuterVolumeSpecName: "kube-api-access-dr96b") pod "9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" (UID: "9cbce9fd-ce3a-4fca-bfea-5a837d39be7c"). InnerVolumeSpecName "kube-api-access-dr96b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.136882 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" (UID: "9cbce9fd-ce3a-4fca-bfea-5a837d39be7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.191824 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr96b\" (UniqueName: \"kubernetes.io/projected/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-kube-api-access-dr96b\") on node \"crc\" DevicePath \"\""
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.191856 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.191866 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.494167 4937 generic.go:334] "Generic (PLEG): container finished" podID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c"
containerID="58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c" exitCode=0 Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.494218 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerDied","Data":"58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c"} Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.494257 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l68qm" event={"ID":"9cbce9fd-ce3a-4fca-bfea-5a837d39be7c","Type":"ContainerDied","Data":"60771c2b2e3f284541c8a17fcf4d49d2214072283f06cac722793bffeff70154"} Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.494255 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l68qm" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.494281 4937 scope.go:117] "RemoveContainer" containerID="58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.523812 4937 scope.go:117] "RemoveContainer" containerID="8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.539166 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l68qm"] Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.545076 4937 scope.go:117] "RemoveContainer" containerID="5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.548937 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l68qm"] Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.592714 4937 scope.go:117] "RemoveContainer" containerID="58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c" Jan 23 
08:22:19 crc kubenswrapper[4937]: E0123 08:22:19.593086 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c\": container with ID starting with 58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c not found: ID does not exist" containerID="58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.593125 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c"} err="failed to get container status \"58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c\": rpc error: code = NotFound desc = could not find container \"58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c\": container with ID starting with 58c0a6ac61826f2dc50b40f1a8e8c97c8f6504f3c29d7293108f00819ae9e66c not found: ID does not exist" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.593152 4937 scope.go:117] "RemoveContainer" containerID="8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94" Jan 23 08:22:19 crc kubenswrapper[4937]: E0123 08:22:19.593814 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94\": container with ID starting with 8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94 not found: ID does not exist" containerID="8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.593847 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94"} err="failed to get container status 
\"8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94\": rpc error: code = NotFound desc = could not find container \"8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94\": container with ID starting with 8752b28bc77b0eae8378ffd975ee0e26168fdedb59a41463b93135ccbfcace94 not found: ID does not exist" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.593868 4937 scope.go:117] "RemoveContainer" containerID="5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b" Jan 23 08:22:19 crc kubenswrapper[4937]: E0123 08:22:19.594104 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b\": container with ID starting with 5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b not found: ID does not exist" containerID="5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b" Jan 23 08:22:19 crc kubenswrapper[4937]: I0123 08:22:19.594135 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b"} err="failed to get container status \"5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b\": rpc error: code = NotFound desc = could not find container \"5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b\": container with ID starting with 5498d947a13e75799c71080484b4b1998fc72116d9201336608ad055ccdb9b1b not found: ID does not exist" Jan 23 08:22:20 crc kubenswrapper[4937]: I0123 08:22:20.543749 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" path="/var/lib/kubelet/pods/9cbce9fd-ce3a-4fca-bfea-5a837d39be7c/volumes" Jan 23 08:22:37 crc kubenswrapper[4937]: I0123 08:22:37.724362 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:22:37 crc kubenswrapper[4937]: I0123 08:22:37.725014 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:22:37 crc kubenswrapper[4937]: I0123 08:22:37.725086 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 08:22:37 crc kubenswrapper[4937]: I0123 08:22:37.726017 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66783b8294a23c431916f9da9652fe4cd266dcea80d119e56b082c7f71b7ed3e"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:22:37 crc kubenswrapper[4937]: I0123 08:22:37.726089 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://66783b8294a23c431916f9da9652fe4cd266dcea80d119e56b082c7f71b7ed3e" gracePeriod=600 Jan 23 08:22:38 crc kubenswrapper[4937]: I0123 08:22:38.667178 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="66783b8294a23c431916f9da9652fe4cd266dcea80d119e56b082c7f71b7ed3e" exitCode=0 Jan 23 08:22:38 crc kubenswrapper[4937]: I0123 08:22:38.667233 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"66783b8294a23c431916f9da9652fe4cd266dcea80d119e56b082c7f71b7ed3e"} Jan 23 08:22:38 crc kubenswrapper[4937]: I0123 08:22:38.667478 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b"} Jan 23 08:22:38 crc kubenswrapper[4937]: I0123 08:22:38.667505 4937 scope.go:117] "RemoveContainer" containerID="8fd4e0645d21aa82f05918fc719cababcccdf2c56f49554d678dafdbc1df10ee" Jan 23 08:25:07 crc kubenswrapper[4937]: I0123 08:25:07.723920 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:25:07 crc kubenswrapper[4937]: I0123 08:25:07.724497 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:25:37 crc kubenswrapper[4937]: I0123 08:25:37.723670 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:25:37 crc kubenswrapper[4937]: I0123 08:25:37.725103 4937 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.723810 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.724476 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.724537 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.725442 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.725518 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" 
containerID="cri-o://3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" gracePeriod=600 Jan 23 08:26:07 crc kubenswrapper[4937]: E0123 08:26:07.849783 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.915565 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" exitCode=0 Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.915627 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b"} Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.915670 4937 scope.go:117] "RemoveContainer" containerID="66783b8294a23c431916f9da9652fe4cd266dcea80d119e56b082c7f71b7ed3e" Jan 23 08:26:07 crc kubenswrapper[4937]: I0123 08:26:07.916878 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:26:07 crc kubenswrapper[4937]: E0123 08:26:07.917326 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:26:22 crc kubenswrapper[4937]: I0123 08:26:22.527170 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:26:22 crc kubenswrapper[4937]: E0123 08:26:22.528067 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:26:35 crc kubenswrapper[4937]: I0123 08:26:35.526564 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:26:35 crc kubenswrapper[4937]: E0123 08:26:35.527384 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:26:49 crc kubenswrapper[4937]: I0123 08:26:49.527697 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:26:49 crc kubenswrapper[4937]: E0123 08:26:49.528455 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:27:01 crc kubenswrapper[4937]: I0123 08:27:01.527262 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:27:01 crc kubenswrapper[4937]: E0123 08:27:01.528027 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:27:15 crc kubenswrapper[4937]: I0123 08:27:15.526810 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:27:15 crc kubenswrapper[4937]: E0123 08:27:15.527635 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:24.999545 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qjw56"] Jan 23 08:27:25 crc kubenswrapper[4937]: E0123 08:27:25.000951 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="extract-content" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.000969 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" 
containerName="extract-content" Jan 23 08:27:25 crc kubenswrapper[4937]: E0123 08:27:25.000984 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="registry-server" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.000994 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="registry-server" Jan 23 08:27:25 crc kubenswrapper[4937]: E0123 08:27:25.001020 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="extract-content" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.001028 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="extract-content" Jan 23 08:27:25 crc kubenswrapper[4937]: E0123 08:27:25.001043 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="registry-server" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.001050 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="registry-server" Jan 23 08:27:25 crc kubenswrapper[4937]: E0123 08:27:25.001077 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="extract-utilities" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.001086 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="extract-utilities" Jan 23 08:27:25 crc kubenswrapper[4937]: E0123 08:27:25.001107 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="extract-utilities" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.001115 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" 
containerName="extract-utilities" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.001381 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbce9fd-ce3a-4fca-bfea-5a837d39be7c" containerName="registry-server" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.001401 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe53d62-c089-4443-9064-aceef9768c8a" containerName="registry-server" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.003455 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.014729 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjw56"] Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.088141 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-utilities\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.088237 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99kv\" (UniqueName: \"kubernetes.io/projected/4a05800c-5c8b-443e-b4a5-d36701611412-kube-api-access-j99kv\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.088444 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-catalog-content\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " 
pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.206781 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-catalog-content\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.207314 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-utilities\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.207339 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-catalog-content\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.207800 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-utilities\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.208117 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99kv\" (UniqueName: \"kubernetes.io/projected/4a05800c-5c8b-443e-b4a5-d36701611412-kube-api-access-j99kv\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" 
Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.236675 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99kv\" (UniqueName: \"kubernetes.io/projected/4a05800c-5c8b-443e-b4a5-d36701611412-kube-api-access-j99kv\") pod \"redhat-marketplace-qjw56\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.328963 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:25 crc kubenswrapper[4937]: I0123 08:27:25.819681 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjw56"] Jan 23 08:27:26 crc kubenswrapper[4937]: I0123 08:27:26.627497 4937 generic.go:334] "Generic (PLEG): container finished" podID="4a05800c-5c8b-443e-b4a5-d36701611412" containerID="623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570" exitCode=0 Jan 23 08:27:26 crc kubenswrapper[4937]: I0123 08:27:26.627656 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerDied","Data":"623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570"} Jan 23 08:27:26 crc kubenswrapper[4937]: I0123 08:27:26.627996 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerStarted","Data":"7695092f8a016f1fdfc3112b0bba2ddfea0efe3382c020a40a4750b92f3ff9f2"} Jan 23 08:27:26 crc kubenswrapper[4937]: I0123 08:27:26.630045 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:27:27 crc kubenswrapper[4937]: I0123 08:27:27.639896 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" 
event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerStarted","Data":"7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f"} Jan 23 08:27:28 crc kubenswrapper[4937]: I0123 08:27:28.527190 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:27:28 crc kubenswrapper[4937]: E0123 08:27:28.527861 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:27:28 crc kubenswrapper[4937]: I0123 08:27:28.650453 4937 generic.go:334] "Generic (PLEG): container finished" podID="4a05800c-5c8b-443e-b4a5-d36701611412" containerID="7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f" exitCode=0 Jan 23 08:27:28 crc kubenswrapper[4937]: I0123 08:27:28.650504 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerDied","Data":"7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f"} Jan 23 08:27:29 crc kubenswrapper[4937]: I0123 08:27:29.661983 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerStarted","Data":"467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524"} Jan 23 08:27:29 crc kubenswrapper[4937]: I0123 08:27:29.682576 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qjw56" podStartSLOduration=3.204947098 podStartE2EDuration="5.682558356s" 
podCreationTimestamp="2026-01-23 08:27:24 +0000 UTC" firstStartedPulling="2026-01-23 08:27:26.629697393 +0000 UTC m=+6846.433464066" lastFinishedPulling="2026-01-23 08:27:29.107308681 +0000 UTC m=+6848.911075324" observedRunningTime="2026-01-23 08:27:29.680461489 +0000 UTC m=+6849.484228162" watchObservedRunningTime="2026-01-23 08:27:29.682558356 +0000 UTC m=+6849.486325009" Jan 23 08:27:35 crc kubenswrapper[4937]: I0123 08:27:35.329985 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:35 crc kubenswrapper[4937]: I0123 08:27:35.330571 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:35 crc kubenswrapper[4937]: I0123 08:27:35.376825 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:35 crc kubenswrapper[4937]: I0123 08:27:35.759532 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:38 crc kubenswrapper[4937]: I0123 08:27:38.622631 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjw56"] Jan 23 08:27:38 crc kubenswrapper[4937]: I0123 08:27:38.623348 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qjw56" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="registry-server" containerID="cri-o://467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524" gracePeriod=2 Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.128207 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.195779 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j99kv\" (UniqueName: \"kubernetes.io/projected/4a05800c-5c8b-443e-b4a5-d36701611412-kube-api-access-j99kv\") pod \"4a05800c-5c8b-443e-b4a5-d36701611412\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.195904 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-utilities\") pod \"4a05800c-5c8b-443e-b4a5-d36701611412\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.196094 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-catalog-content\") pod \"4a05800c-5c8b-443e-b4a5-d36701611412\" (UID: \"4a05800c-5c8b-443e-b4a5-d36701611412\") " Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.196942 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-utilities" (OuterVolumeSpecName: "utilities") pod "4a05800c-5c8b-443e-b4a5-d36701611412" (UID: "4a05800c-5c8b-443e-b4a5-d36701611412"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.202344 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a05800c-5c8b-443e-b4a5-d36701611412-kube-api-access-j99kv" (OuterVolumeSpecName: "kube-api-access-j99kv") pod "4a05800c-5c8b-443e-b4a5-d36701611412" (UID: "4a05800c-5c8b-443e-b4a5-d36701611412"). InnerVolumeSpecName "kube-api-access-j99kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.234000 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a05800c-5c8b-443e-b4a5-d36701611412" (UID: "4a05800c-5c8b-443e-b4a5-d36701611412"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.298840 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.298873 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j99kv\" (UniqueName: \"kubernetes.io/projected/4a05800c-5c8b-443e-b4a5-d36701611412-kube-api-access-j99kv\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.298884 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a05800c-5c8b-443e-b4a5-d36701611412-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.755782 4937 generic.go:334] "Generic (PLEG): container finished" podID="4a05800c-5c8b-443e-b4a5-d36701611412" containerID="467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524" exitCode=0 Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.755853 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjw56" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.755867 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerDied","Data":"467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524"} Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.755942 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjw56" event={"ID":"4a05800c-5c8b-443e-b4a5-d36701611412","Type":"ContainerDied","Data":"7695092f8a016f1fdfc3112b0bba2ddfea0efe3382c020a40a4750b92f3ff9f2"} Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.755966 4937 scope.go:117] "RemoveContainer" containerID="467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.799485 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjw56"] Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.812966 4937 scope.go:117] "RemoveContainer" containerID="7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.816228 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjw56"] Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.849790 4937 scope.go:117] "RemoveContainer" containerID="623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.900669 4937 scope.go:117] "RemoveContainer" containerID="467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524" Jan 23 08:27:39 crc kubenswrapper[4937]: E0123 08:27:39.901398 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524\": container with ID starting with 467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524 not found: ID does not exist" containerID="467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.901445 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524"} err="failed to get container status \"467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524\": rpc error: code = NotFound desc = could not find container \"467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524\": container with ID starting with 467625400a7787e4c082a7014859eab8a3b8213c608a76c44093872c2d0a1524 not found: ID does not exist" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.901475 4937 scope.go:117] "RemoveContainer" containerID="7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f" Jan 23 08:27:39 crc kubenswrapper[4937]: E0123 08:27:39.901907 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f\": container with ID starting with 7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f not found: ID does not exist" containerID="7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.901951 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f"} err="failed to get container status \"7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f\": rpc error: code = NotFound desc = could not find container \"7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f\": container with ID 
starting with 7c60f0e26b135b9657065c0adae8e0b5b318cf19bccbe4ff445723db35efe89f not found: ID does not exist" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.901979 4937 scope.go:117] "RemoveContainer" containerID="623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570" Jan 23 08:27:39 crc kubenswrapper[4937]: E0123 08:27:39.902414 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570\": container with ID starting with 623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570 not found: ID does not exist" containerID="623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570" Jan 23 08:27:39 crc kubenswrapper[4937]: I0123 08:27:39.902496 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570"} err="failed to get container status \"623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570\": rpc error: code = NotFound desc = could not find container \"623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570\": container with ID starting with 623fbb7e1fd4af669dadf9f0112403e5fc4200e7df83f71de0d08f98093b4570 not found: ID does not exist" Jan 23 08:27:40 crc kubenswrapper[4937]: I0123 08:27:40.537961 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" path="/var/lib/kubelet/pods/4a05800c-5c8b-443e-b4a5-d36701611412/volumes" Jan 23 08:27:42 crc kubenswrapper[4937]: I0123 08:27:42.527014 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:27:42 crc kubenswrapper[4937]: E0123 08:27:42.527690 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:27:55 crc kubenswrapper[4937]: I0123 08:27:55.527172 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:27:55 crc kubenswrapper[4937]: E0123 08:27:55.528133 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:28:06 crc kubenswrapper[4937]: I0123 08:28:06.527283 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:28:06 crc kubenswrapper[4937]: E0123 08:28:06.528490 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:28:20 crc kubenswrapper[4937]: I0123 08:28:20.540865 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:28:20 crc kubenswrapper[4937]: E0123 08:28:20.541857 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:28:34 crc kubenswrapper[4937]: I0123 08:28:34.526712 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:28:34 crc kubenswrapper[4937]: E0123 08:28:34.527548 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:28:46 crc kubenswrapper[4937]: I0123 08:28:46.526875 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:28:46 crc kubenswrapper[4937]: E0123 08:28:46.528105 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:28:57 crc kubenswrapper[4937]: I0123 08:28:57.527168 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:28:57 crc kubenswrapper[4937]: E0123 08:28:57.528016 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:29:12 crc kubenswrapper[4937]: I0123 08:29:12.531472 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:29:12 crc kubenswrapper[4937]: E0123 08:29:12.532280 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:29:26 crc kubenswrapper[4937]: I0123 08:29:26.527159 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:29:26 crc kubenswrapper[4937]: E0123 08:29:26.528189 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:29:39 crc kubenswrapper[4937]: I0123 08:29:39.527000 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:29:39 crc kubenswrapper[4937]: E0123 08:29:39.527737 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:29:51 crc kubenswrapper[4937]: I0123 08:29:51.526786 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:29:51 crc kubenswrapper[4937]: E0123 08:29:51.527497 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.158822 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw"] Jan 23 08:30:00 crc kubenswrapper[4937]: E0123 08:30:00.159954 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="extract-content" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.159975 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="extract-content" Jan 23 08:30:00 crc kubenswrapper[4937]: E0123 08:30:00.160008 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="extract-utilities" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.160015 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="extract-utilities" 
Jan 23 08:30:00 crc kubenswrapper[4937]: E0123 08:30:00.160040 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="registry-server" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.160047 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="registry-server" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.160291 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a05800c-5c8b-443e-b4a5-d36701611412" containerName="registry-server" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.161332 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.169565 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw"] Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.178862 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.179369 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.267057 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7129aaa-c0a2-4137-b920-43ce34d790a1-config-volume\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.267491 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7129aaa-c0a2-4137-b920-43ce34d790a1-secret-volume\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.267858 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd48l\" (UniqueName: \"kubernetes.io/projected/b7129aaa-c0a2-4137-b920-43ce34d790a1-kube-api-access-vd48l\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.370404 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7129aaa-c0a2-4137-b920-43ce34d790a1-secret-volume\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.370573 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd48l\" (UniqueName: \"kubernetes.io/projected/b7129aaa-c0a2-4137-b920-43ce34d790a1-kube-api-access-vd48l\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.370664 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7129aaa-c0a2-4137-b920-43ce34d790a1-config-volume\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.372076 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7129aaa-c0a2-4137-b920-43ce34d790a1-config-volume\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.376948 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7129aaa-c0a2-4137-b920-43ce34d790a1-secret-volume\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.391334 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd48l\" (UniqueName: \"kubernetes.io/projected/b7129aaa-c0a2-4137-b920-43ce34d790a1-kube-api-access-vd48l\") pod \"collect-profiles-29485950-9txkw\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:00 crc kubenswrapper[4937]: I0123 08:30:00.493982 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:01 crc kubenswrapper[4937]: I0123 08:30:01.004374 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw"] Jan 23 08:30:01 crc kubenswrapper[4937]: I0123 08:30:01.046891 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" event={"ID":"b7129aaa-c0a2-4137-b920-43ce34d790a1","Type":"ContainerStarted","Data":"c9addd73ce091a71b2da477448d7b28697c511dd7714d23e5ecd8418fa4d8b8e"} Jan 23 08:30:01 crc kubenswrapper[4937]: E0123 08:30:01.953377 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7129aaa_c0a2_4137_b920_43ce34d790a1.slice/crio-conmon-7d602f98e920527921f9b49a69fefe244759c6ae1b2741dc451839337b48e143.scope\": RecentStats: unable to find data in memory cache]" Jan 23 08:30:02 crc kubenswrapper[4937]: I0123 08:30:02.058799 4937 generic.go:334] "Generic (PLEG): container finished" podID="b7129aaa-c0a2-4137-b920-43ce34d790a1" containerID="7d602f98e920527921f9b49a69fefe244759c6ae1b2741dc451839337b48e143" exitCode=0 Jan 23 08:30:02 crc kubenswrapper[4937]: I0123 08:30:02.058842 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" event={"ID":"b7129aaa-c0a2-4137-b920-43ce34d790a1","Type":"ContainerDied","Data":"7d602f98e920527921f9b49a69fefe244759c6ae1b2741dc451839337b48e143"} Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.439464 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.581838 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7129aaa-c0a2-4137-b920-43ce34d790a1-secret-volume\") pod \"b7129aaa-c0a2-4137-b920-43ce34d790a1\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.581955 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd48l\" (UniqueName: \"kubernetes.io/projected/b7129aaa-c0a2-4137-b920-43ce34d790a1-kube-api-access-vd48l\") pod \"b7129aaa-c0a2-4137-b920-43ce34d790a1\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.581993 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7129aaa-c0a2-4137-b920-43ce34d790a1-config-volume\") pod \"b7129aaa-c0a2-4137-b920-43ce34d790a1\" (UID: \"b7129aaa-c0a2-4137-b920-43ce34d790a1\") " Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.583509 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7129aaa-c0a2-4137-b920-43ce34d790a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7129aaa-c0a2-4137-b920-43ce34d790a1" (UID: "b7129aaa-c0a2-4137-b920-43ce34d790a1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.583902 4937 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7129aaa-c0a2-4137-b920-43ce34d790a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.591575 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7129aaa-c0a2-4137-b920-43ce34d790a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7129aaa-c0a2-4137-b920-43ce34d790a1" (UID: "b7129aaa-c0a2-4137-b920-43ce34d790a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.591838 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7129aaa-c0a2-4137-b920-43ce34d790a1-kube-api-access-vd48l" (OuterVolumeSpecName: "kube-api-access-vd48l") pod "b7129aaa-c0a2-4137-b920-43ce34d790a1" (UID: "b7129aaa-c0a2-4137-b920-43ce34d790a1"). InnerVolumeSpecName "kube-api-access-vd48l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.685641 4937 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7129aaa-c0a2-4137-b920-43ce34d790a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:03 crc kubenswrapper[4937]: I0123 08:30:03.685980 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd48l\" (UniqueName: \"kubernetes.io/projected/b7129aaa-c0a2-4137-b920-43ce34d790a1-kube-api-access-vd48l\") on node \"crc\" DevicePath \"\"" Jan 23 08:30:04 crc kubenswrapper[4937]: I0123 08:30:04.076362 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" event={"ID":"b7129aaa-c0a2-4137-b920-43ce34d790a1","Type":"ContainerDied","Data":"c9addd73ce091a71b2da477448d7b28697c511dd7714d23e5ecd8418fa4d8b8e"} Jan 23 08:30:04 crc kubenswrapper[4937]: I0123 08:30:04.076400 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9addd73ce091a71b2da477448d7b28697c511dd7714d23e5ecd8418fa4d8b8e" Jan 23 08:30:04 crc kubenswrapper[4937]: I0123 08:30:04.076791 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-9txkw" Jan 23 08:30:04 crc kubenswrapper[4937]: I0123 08:30:04.519360 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"] Jan 23 08:30:04 crc kubenswrapper[4937]: I0123 08:30:04.539223 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-dkz9k"] Jan 23 08:30:06 crc kubenswrapper[4937]: I0123 08:30:06.527497 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:30:06 crc kubenswrapper[4937]: E0123 08:30:06.528296 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:30:06 crc kubenswrapper[4937]: I0123 08:30:06.539760 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ece4f7f-8921-4578-aa47-537051933e2f" path="/var/lib/kubelet/pods/0ece4f7f-8921-4578-aa47-537051933e2f/volumes" Jan 23 08:30:10 crc kubenswrapper[4937]: I0123 08:30:10.562046 4937 scope.go:117] "RemoveContainer" containerID="467a380d99b57dc1da550ccb15c03ae637cba6ec4af49285310931fc549509fc" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.050073 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g48rl/must-gather-t9dpk"] Jan 23 08:30:17 crc kubenswrapper[4937]: E0123 08:30:17.050883 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7129aaa-c0a2-4137-b920-43ce34d790a1" containerName="collect-profiles" Jan 23 08:30:17 crc 
kubenswrapper[4937]: I0123 08:30:17.050896 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7129aaa-c0a2-4137-b920-43ce34d790a1" containerName="collect-profiles" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.051083 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7129aaa-c0a2-4137-b920-43ce34d790a1" containerName="collect-profiles" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.052205 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.054479 4937 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-g48rl"/"default-dockercfg-cxjn7" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.054742 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g48rl"/"kube-root-ca.crt" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.059070 4937 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-g48rl"/"openshift-service-ca.crt" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.064137 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g48rl/must-gather-t9dpk"] Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.204317 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76zdb\" (UniqueName: \"kubernetes.io/projected/ab264037-efa1-4d05-9bc9-028d34f90a92-kube-api-access-76zdb\") pod \"must-gather-t9dpk\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.204456 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab264037-efa1-4d05-9bc9-028d34f90a92-must-gather-output\") 
pod \"must-gather-t9dpk\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.307035 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab264037-efa1-4d05-9bc9-028d34f90a92-must-gather-output\") pod \"must-gather-t9dpk\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.307259 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76zdb\" (UniqueName: \"kubernetes.io/projected/ab264037-efa1-4d05-9bc9-028d34f90a92-kube-api-access-76zdb\") pod \"must-gather-t9dpk\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.307574 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab264037-efa1-4d05-9bc9-028d34f90a92-must-gather-output\") pod \"must-gather-t9dpk\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.345980 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76zdb\" (UniqueName: \"kubernetes.io/projected/ab264037-efa1-4d05-9bc9-028d34f90a92-kube-api-access-76zdb\") pod \"must-gather-t9dpk\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.375312 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:30:17 crc kubenswrapper[4937]: I0123 08:30:17.947948 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g48rl/must-gather-t9dpk"] Jan 23 08:30:17 crc kubenswrapper[4937]: W0123 08:30:17.964167 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab264037_efa1_4d05_9bc9_028d34f90a92.slice/crio-4ce2078609589d70baee5ba284db324cdafd63e8cc46798b13f5b9933e39a3ac WatchSource:0}: Error finding container 4ce2078609589d70baee5ba284db324cdafd63e8cc46798b13f5b9933e39a3ac: Status 404 returned error can't find the container with id 4ce2078609589d70baee5ba284db324cdafd63e8cc46798b13f5b9933e39a3ac Jan 23 08:30:18 crc kubenswrapper[4937]: I0123 08:30:18.215503 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/must-gather-t9dpk" event={"ID":"ab264037-efa1-4d05-9bc9-028d34f90a92","Type":"ContainerStarted","Data":"4ce2078609589d70baee5ba284db324cdafd63e8cc46798b13f5b9933e39a3ac"} Jan 23 08:30:18 crc kubenswrapper[4937]: I0123 08:30:18.526895 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:30:18 crc kubenswrapper[4937]: E0123 08:30:18.527195 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:30:27 crc kubenswrapper[4937]: I0123 08:30:27.305832 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/must-gather-t9dpk" 
event={"ID":"ab264037-efa1-4d05-9bc9-028d34f90a92","Type":"ContainerStarted","Data":"9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f"} Jan 23 08:30:27 crc kubenswrapper[4937]: I0123 08:30:27.306321 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/must-gather-t9dpk" event={"ID":"ab264037-efa1-4d05-9bc9-028d34f90a92","Type":"ContainerStarted","Data":"4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb"} Jan 23 08:30:27 crc kubenswrapper[4937]: I0123 08:30:27.325101 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g48rl/must-gather-t9dpk" podStartSLOduration=1.84734354 podStartE2EDuration="10.325079657s" podCreationTimestamp="2026-01-23 08:30:17 +0000 UTC" firstStartedPulling="2026-01-23 08:30:17.968675634 +0000 UTC m=+7017.772442287" lastFinishedPulling="2026-01-23 08:30:26.446411751 +0000 UTC m=+7026.250178404" observedRunningTime="2026-01-23 08:30:27.320935304 +0000 UTC m=+7027.124701957" watchObservedRunningTime="2026-01-23 08:30:27.325079657 +0000 UTC m=+7027.128846310" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.382615 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g48rl/crc-debug-55mw5"] Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.384381 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.426110 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkt2\" (UniqueName: \"kubernetes.io/projected/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-kube-api-access-lbkt2\") pod \"crc-debug-55mw5\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.426194 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-host\") pod \"crc-debug-55mw5\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.527381 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbkt2\" (UniqueName: \"kubernetes.io/projected/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-kube-api-access-lbkt2\") pod \"crc-debug-55mw5\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.527458 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-host\") pod \"crc-debug-55mw5\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.527544 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-host\") pod \"crc-debug-55mw5\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc 
kubenswrapper[4937]: I0123 08:30:31.548469 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkt2\" (UniqueName: \"kubernetes.io/projected/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-kube-api-access-lbkt2\") pod \"crc-debug-55mw5\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: I0123 08:30:31.706029 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:30:31 crc kubenswrapper[4937]: W0123 08:30:31.751086 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod161b4538_8f29_4f6b_81f2_987ad7fe6ee9.slice/crio-d0f55bdfc1a6d635c623e4020e9795cdca5a0a564944d62e4e3c76f596928dd5 WatchSource:0}: Error finding container d0f55bdfc1a6d635c623e4020e9795cdca5a0a564944d62e4e3c76f596928dd5: Status 404 returned error can't find the container with id d0f55bdfc1a6d635c623e4020e9795cdca5a0a564944d62e4e3c76f596928dd5 Jan 23 08:30:32 crc kubenswrapper[4937]: I0123 08:30:32.358816 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-55mw5" event={"ID":"161b4538-8f29-4f6b-81f2-987ad7fe6ee9","Type":"ContainerStarted","Data":"d0f55bdfc1a6d635c623e4020e9795cdca5a0a564944d62e4e3c76f596928dd5"} Jan 23 08:30:33 crc kubenswrapper[4937]: I0123 08:30:33.527153 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:30:33 crc kubenswrapper[4937]: E0123 08:30:33.527722 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:30:44 crc kubenswrapper[4937]: I0123 08:30:44.527147 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:30:44 crc kubenswrapper[4937]: E0123 08:30:44.528152 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:30:45 crc kubenswrapper[4937]: I0123 08:30:45.497635 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-55mw5" event={"ID":"161b4538-8f29-4f6b-81f2-987ad7fe6ee9","Type":"ContainerStarted","Data":"992d92b4b040e071125a47e5fef9b8c9ac1b4cb9d65d6c6a239fd556dc7ce29e"} Jan 23 08:30:45 crc kubenswrapper[4937]: I0123 08:30:45.525361 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g48rl/crc-debug-55mw5" podStartSLOduration=1.298251024 podStartE2EDuration="14.525333842s" podCreationTimestamp="2026-01-23 08:30:31 +0000 UTC" firstStartedPulling="2026-01-23 08:30:31.753845799 +0000 UTC m=+7031.557612452" lastFinishedPulling="2026-01-23 08:30:44.980928617 +0000 UTC m=+7044.784695270" observedRunningTime="2026-01-23 08:30:45.515835194 +0000 UTC m=+7045.319601847" watchObservedRunningTime="2026-01-23 08:30:45.525333842 +0000 UTC m=+7045.329100495" Jan 23 08:30:55 crc kubenswrapper[4937]: I0123 08:30:55.527939 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:30:55 crc kubenswrapper[4937]: E0123 08:30:55.528740 4937 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:31:06 crc kubenswrapper[4937]: I0123 08:31:06.528170 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:31:06 crc kubenswrapper[4937]: E0123 08:31:06.529161 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:31:18 crc kubenswrapper[4937]: I0123 08:31:18.527016 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:31:18 crc kubenswrapper[4937]: I0123 08:31:18.839039 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"a5edf7b95091b558c59a7f1a2a7be767f6ee4fa6b350a35d1b9e2c918a3e96d4"} Jan 23 08:31:32 crc kubenswrapper[4937]: I0123 08:31:32.957948 4937 generic.go:334] "Generic (PLEG): container finished" podID="161b4538-8f29-4f6b-81f2-987ad7fe6ee9" containerID="992d92b4b040e071125a47e5fef9b8c9ac1b4cb9d65d6c6a239fd556dc7ce29e" exitCode=0 Jan 23 08:31:32 crc kubenswrapper[4937]: I0123 08:31:32.958142 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-g48rl/crc-debug-55mw5" event={"ID":"161b4538-8f29-4f6b-81f2-987ad7fe6ee9","Type":"ContainerDied","Data":"992d92b4b040e071125a47e5fef9b8c9ac1b4cb9d65d6c6a239fd556dc7ce29e"} Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.106701 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.120162 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbkt2\" (UniqueName: \"kubernetes.io/projected/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-kube-api-access-lbkt2\") pod \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.120302 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-host\") pod \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\" (UID: \"161b4538-8f29-4f6b-81f2-987ad7fe6ee9\") " Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.120410 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-host" (OuterVolumeSpecName: "host") pod "161b4538-8f29-4f6b-81f2-987ad7fe6ee9" (UID: "161b4538-8f29-4f6b-81f2-987ad7fe6ee9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.121251 4937 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-host\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.132924 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-kube-api-access-lbkt2" (OuterVolumeSpecName: "kube-api-access-lbkt2") pod "161b4538-8f29-4f6b-81f2-987ad7fe6ee9" (UID: "161b4538-8f29-4f6b-81f2-987ad7fe6ee9"). InnerVolumeSpecName "kube-api-access-lbkt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.143965 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g48rl/crc-debug-55mw5"] Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.158864 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g48rl/crc-debug-55mw5"] Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.223059 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbkt2\" (UniqueName: \"kubernetes.io/projected/161b4538-8f29-4f6b-81f2-987ad7fe6ee9-kube-api-access-lbkt2\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.541682 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161b4538-8f29-4f6b-81f2-987ad7fe6ee9" path="/var/lib/kubelet/pods/161b4538-8f29-4f6b-81f2-987ad7fe6ee9/volumes" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.977870 4937 scope.go:117] "RemoveContainer" containerID="992d92b4b040e071125a47e5fef9b8c9ac1b4cb9d65d6c6a239fd556dc7ce29e" Jan 23 08:31:34 crc kubenswrapper[4937]: I0123 08:31:34.977986 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-55mw5" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.348645 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g48rl/crc-debug-vqxx4"] Jan 23 08:31:35 crc kubenswrapper[4937]: E0123 08:31:35.349396 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161b4538-8f29-4f6b-81f2-987ad7fe6ee9" containerName="container-00" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.349412 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="161b4538-8f29-4f6b-81f2-987ad7fe6ee9" containerName="container-00" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.349677 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="161b4538-8f29-4f6b-81f2-987ad7fe6ee9" containerName="container-00" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.350495 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.447141 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmsp\" (UniqueName: \"kubernetes.io/projected/ab99aedd-80b1-4796-be3d-9100873ba779-kube-api-access-6tmsp\") pod \"crc-debug-vqxx4\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.447580 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab99aedd-80b1-4796-be3d-9100873ba779-host\") pod \"crc-debug-vqxx4\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.550174 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ab99aedd-80b1-4796-be3d-9100873ba779-host\") pod \"crc-debug-vqxx4\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.550575 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmsp\" (UniqueName: \"kubernetes.io/projected/ab99aedd-80b1-4796-be3d-9100873ba779-kube-api-access-6tmsp\") pod \"crc-debug-vqxx4\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.550344 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab99aedd-80b1-4796-be3d-9100873ba779-host\") pod \"crc-debug-vqxx4\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.569712 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmsp\" (UniqueName: \"kubernetes.io/projected/ab99aedd-80b1-4796-be3d-9100873ba779-kube-api-access-6tmsp\") pod \"crc-debug-vqxx4\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.669323 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:35 crc kubenswrapper[4937]: W0123 08:31:35.693438 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab99aedd_80b1_4796_be3d_9100873ba779.slice/crio-c38806e6fffca39b4c4b90d103128394f4ad5390cd9f908c9fd64aa08a0f28c4 WatchSource:0}: Error finding container c38806e6fffca39b4c4b90d103128394f4ad5390cd9f908c9fd64aa08a0f28c4: Status 404 returned error can't find the container with id c38806e6fffca39b4c4b90d103128394f4ad5390cd9f908c9fd64aa08a0f28c4 Jan 23 08:31:35 crc kubenswrapper[4937]: I0123 08:31:35.989494 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" event={"ID":"ab99aedd-80b1-4796-be3d-9100873ba779","Type":"ContainerStarted","Data":"c38806e6fffca39b4c4b90d103128394f4ad5390cd9f908c9fd64aa08a0f28c4"} Jan 23 08:31:37 crc kubenswrapper[4937]: I0123 08:31:37.001530 4937 generic.go:334] "Generic (PLEG): container finished" podID="ab99aedd-80b1-4796-be3d-9100873ba779" containerID="27f795b90e332bbbb886db307f5b9a1ece8b901f45b806a6ba9e385bddf9766b" exitCode=0 Jan 23 08:31:37 crc kubenswrapper[4937]: I0123 08:31:37.001578 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" event={"ID":"ab99aedd-80b1-4796-be3d-9100873ba779","Type":"ContainerDied","Data":"27f795b90e332bbbb886db307f5b9a1ece8b901f45b806a6ba9e385bddf9766b"} Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.141661 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.307681 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab99aedd-80b1-4796-be3d-9100873ba779-host\") pod \"ab99aedd-80b1-4796-be3d-9100873ba779\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.307744 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmsp\" (UniqueName: \"kubernetes.io/projected/ab99aedd-80b1-4796-be3d-9100873ba779-kube-api-access-6tmsp\") pod \"ab99aedd-80b1-4796-be3d-9100873ba779\" (UID: \"ab99aedd-80b1-4796-be3d-9100873ba779\") " Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.307831 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab99aedd-80b1-4796-be3d-9100873ba779-host" (OuterVolumeSpecName: "host") pod "ab99aedd-80b1-4796-be3d-9100873ba779" (UID: "ab99aedd-80b1-4796-be3d-9100873ba779"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.308330 4937 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ab99aedd-80b1-4796-be3d-9100873ba779-host\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.327464 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab99aedd-80b1-4796-be3d-9100873ba779-kube-api-access-6tmsp" (OuterVolumeSpecName: "kube-api-access-6tmsp") pod "ab99aedd-80b1-4796-be3d-9100873ba779" (UID: "ab99aedd-80b1-4796-be3d-9100873ba779"). InnerVolumeSpecName "kube-api-access-6tmsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:31:38 crc kubenswrapper[4937]: I0123 08:31:38.410563 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmsp\" (UniqueName: \"kubernetes.io/projected/ab99aedd-80b1-4796-be3d-9100873ba779-kube-api-access-6tmsp\") on node \"crc\" DevicePath \"\"" Jan 23 08:31:39 crc kubenswrapper[4937]: I0123 08:31:39.046496 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" event={"ID":"ab99aedd-80b1-4796-be3d-9100873ba779","Type":"ContainerDied","Data":"c38806e6fffca39b4c4b90d103128394f4ad5390cd9f908c9fd64aa08a0f28c4"} Jan 23 08:31:39 crc kubenswrapper[4937]: I0123 08:31:39.046548 4937 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38806e6fffca39b4c4b90d103128394f4ad5390cd9f908c9fd64aa08a0f28c4" Jan 23 08:31:39 crc kubenswrapper[4937]: I0123 08:31:39.046651 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-vqxx4" Jan 23 08:31:39 crc kubenswrapper[4937]: I0123 08:31:39.384219 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g48rl/crc-debug-vqxx4"] Jan 23 08:31:39 crc kubenswrapper[4937]: I0123 08:31:39.419940 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g48rl/crc-debug-vqxx4"] Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.536492 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab99aedd-80b1-4796-be3d-9100873ba779" path="/var/lib/kubelet/pods/ab99aedd-80b1-4796-be3d-9100873ba779/volumes" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.537074 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g48rl/crc-debug-qz7wj"] Jan 23 08:31:40 crc kubenswrapper[4937]: E0123 08:31:40.537431 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab99aedd-80b1-4796-be3d-9100873ba779" 
containerName="container-00" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.537454 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab99aedd-80b1-4796-be3d-9100873ba779" containerName="container-00" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.537714 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab99aedd-80b1-4796-be3d-9100873ba779" containerName="container-00" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.538404 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.662113 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6wc\" (UniqueName: \"kubernetes.io/projected/08f1e051-27f9-4dfc-8c40-281d5e9ff197-kube-api-access-nz6wc\") pod \"crc-debug-qz7wj\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") " pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.662403 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08f1e051-27f9-4dfc-8c40-281d5e9ff197-host\") pod \"crc-debug-qz7wj\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") " pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.764835 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6wc\" (UniqueName: \"kubernetes.io/projected/08f1e051-27f9-4dfc-8c40-281d5e9ff197-kube-api-access-nz6wc\") pod \"crc-debug-qz7wj\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") " pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.764981 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/08f1e051-27f9-4dfc-8c40-281d5e9ff197-host\") pod \"crc-debug-qz7wj\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") " pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.765185 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08f1e051-27f9-4dfc-8c40-281d5e9ff197-host\") pod \"crc-debug-qz7wj\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") " pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.786161 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6wc\" (UniqueName: \"kubernetes.io/projected/08f1e051-27f9-4dfc-8c40-281d5e9ff197-kube-api-access-nz6wc\") pod \"crc-debug-qz7wj\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") " pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:40 crc kubenswrapper[4937]: I0123 08:31:40.857176 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-qz7wj" Jan 23 08:31:41 crc kubenswrapper[4937]: I0123 08:31:41.067372 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-qz7wj" event={"ID":"08f1e051-27f9-4dfc-8c40-281d5e9ff197","Type":"ContainerStarted","Data":"c01c914c3e84ecc961f4023727066773915c4306e6579129cd9414aff50c4195"} Jan 23 08:31:42 crc kubenswrapper[4937]: I0123 08:31:42.075899 4937 generic.go:334] "Generic (PLEG): container finished" podID="08f1e051-27f9-4dfc-8c40-281d5e9ff197" containerID="32cc280133db9f94d2ec7b77dd6f4f57917ffe54eaa93b12f5c622a0ede82778" exitCode=0 Jan 23 08:31:42 crc kubenswrapper[4937]: I0123 08:31:42.075954 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/crc-debug-qz7wj" event={"ID":"08f1e051-27f9-4dfc-8c40-281d5e9ff197","Type":"ContainerDied","Data":"32cc280133db9f94d2ec7b77dd6f4f57917ffe54eaa93b12f5c622a0ede82778"} Jan 23 08:31:42 crc kubenswrapper[4937]: I0123 08:31:42.123106 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g48rl/crc-debug-qz7wj"] Jan 23 08:31:42 crc kubenswrapper[4937]: I0123 08:31:42.135087 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g48rl/crc-debug-qz7wj"] Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.201769 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-qz7wj"
Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.316886 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6wc\" (UniqueName: \"kubernetes.io/projected/08f1e051-27f9-4dfc-8c40-281d5e9ff197-kube-api-access-nz6wc\") pod \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") "
Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.316941 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08f1e051-27f9-4dfc-8c40-281d5e9ff197-host\") pod \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\" (UID: \"08f1e051-27f9-4dfc-8c40-281d5e9ff197\") "
Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.317394 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08f1e051-27f9-4dfc-8c40-281d5e9ff197-host" (OuterVolumeSpecName: "host") pod "08f1e051-27f9-4dfc-8c40-281d5e9ff197" (UID: "08f1e051-27f9-4dfc-8c40-281d5e9ff197"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.318067 4937 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08f1e051-27f9-4dfc-8c40-281d5e9ff197-host\") on node \"crc\" DevicePath \"\""
Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.328858 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f1e051-27f9-4dfc-8c40-281d5e9ff197-kube-api-access-nz6wc" (OuterVolumeSpecName: "kube-api-access-nz6wc") pod "08f1e051-27f9-4dfc-8c40-281d5e9ff197" (UID: "08f1e051-27f9-4dfc-8c40-281d5e9ff197"). InnerVolumeSpecName "kube-api-access-nz6wc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:31:43 crc kubenswrapper[4937]: I0123 08:31:43.419952 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6wc\" (UniqueName: \"kubernetes.io/projected/08f1e051-27f9-4dfc-8c40-281d5e9ff197-kube-api-access-nz6wc\") on node \"crc\" DevicePath \"\""
Jan 23 08:31:44 crc kubenswrapper[4937]: I0123 08:31:44.095164 4937 scope.go:117] "RemoveContainer" containerID="32cc280133db9f94d2ec7b77dd6f4f57917ffe54eaa93b12f5c622a0ede82778"
Jan 23 08:31:44 crc kubenswrapper[4937]: I0123 08:31:44.095195 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/crc-debug-qz7wj"
Jan 23 08:31:44 crc kubenswrapper[4937]: I0123 08:31:44.538668 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f1e051-27f9-4dfc-8c40-281d5e9ff197" path="/var/lib/kubelet/pods/08f1e051-27f9-4dfc-8c40-281d5e9ff197/volumes"
Jan 23 08:32:08 crc kubenswrapper[4937]: I0123 08:32:08.387063 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57c6ff4958-kv8rb_f894d76a-8583-49b5-b88d-19b8bf52081d/barbican-api/0.log"
Jan 23 08:32:08 crc kubenswrapper[4937]: I0123 08:32:08.509784 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57c6ff4958-kv8rb_f894d76a-8583-49b5-b88d-19b8bf52081d/barbican-api-log/0.log"
Jan 23 08:32:08 crc kubenswrapper[4937]: I0123 08:32:08.725052 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fb8d96c46-xq9qn_b1b9c727-287f-4b30-98d8-1706ca360e73/barbican-keystone-listener/0.log"
Jan 23 08:32:08 crc kubenswrapper[4937]: I0123 08:32:08.765797 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7fb8d96c46-xq9qn_b1b9c727-287f-4b30-98d8-1706ca360e73/barbican-keystone-listener-log/0.log"
Jan 23 08:32:08 crc kubenswrapper[4937]: I0123 08:32:08.820644 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-647886d85c-p2mdd_f7d32607-f131-4998-b179-f60612068c4a/barbican-worker/0.log"
Jan 23 08:32:08 crc kubenswrapper[4937]: I0123 08:32:08.937538 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-647886d85c-p2mdd_f7d32607-f131-4998-b179-f60612068c4a/barbican-worker-log/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.069421 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vxxjz_00506369-00e0-43c2-be00-23b30a785c87/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.296955 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5ab05357-8ea2-47be-96c5-641abf53afe0/ceilometer-notification-agent/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.331384 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5ab05357-8ea2-47be-96c5-641abf53afe0/ceilometer-central-agent/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.344560 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5ab05357-8ea2-47be-96c5-641abf53afe0/proxy-httpd/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.374519 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5ab05357-8ea2-47be-96c5-641abf53afe0/sg-core/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.621777 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_045e6674-9717-4cee-960c-7d049e797f45/cinder-api-log/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.778486 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_045e6674-9717-4cee-960c-7d049e797f45/cinder-api/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.868046 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_8ce513c8-df23-4201-a86b-605c7b2ab636/probe/0.log"
Jan 23 08:32:09 crc kubenswrapper[4937]: I0123 08:32:09.913062 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_8ce513c8-df23-4201-a86b-605c7b2ab636/cinder-backup/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.041178 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7/cinder-scheduler/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.106482 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_469f0c1e-e8bd-41ef-9d99-a4c83cc4beb7/probe/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.333150 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_6f5e0df9-7863-4a53-a050-dc572ec6bf8a/probe/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.358540 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_6f5e0df9-7863-4a53-a050-dc572ec6bf8a/cinder-volume/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.517772 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_23def76a-ea91-4c5f-ad7f-1370bc2e8dc4/cinder-volume/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.573642 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_23def76a-ea91-4c5f-ad7f-1370bc2e8dc4/probe/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.664321 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wmxm9_ae86befe-47e5-4645-b432-24184f2ebca6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.812263 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-45kn2_9ecb96e5-109f-44d6-8f2e-ea090aa26541/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:10 crc kubenswrapper[4937]: I0123 08:32:10.904702 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86f4b56d69-9jng8_fed385bc-7605-4d39-916e-7afd43801da8/init/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.105316 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86f4b56d69-9jng8_fed385bc-7605-4d39-916e-7afd43801da8/init/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.194020 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hljnj_4f4b2ee7-d1d8-4a44-9ad2-1fdbb5beea89/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.335747 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-86f4b56d69-9jng8_fed385bc-7605-4d39-916e-7afd43801da8/dnsmasq-dns/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.391743 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_533d5390-298e-4c44-9e48-6dd56773abd7/glance-httpd/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.471053 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_533d5390-298e-4c44-9e48-6dd56773abd7/glance-log/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.586252 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d8b82ffa-578a-490a-8d72-637c6236e89d/glance-httpd/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.634712 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d8b82ffa-578a-490a-8d72-637c6236e89d/glance-log/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.889598 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57b9fd85d8-54qml_6ac78d23-24ea-411c-ba2e-714e3f3fb5d2/horizon/0.log"
Jan 23 08:32:11 crc kubenswrapper[4937]: I0123 08:32:11.924495 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8ptc4_cf409357-277f-4efa-b697-1dda40e6db83/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:12 crc kubenswrapper[4937]: I0123 08:32:12.432583 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qt84f_d79d9a1c-a285-4b25-a57e-fbb3d462d65e/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:12 crc kubenswrapper[4937]: I0123 08:32:12.662397 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29485861-x5bhn_3da5bf42-46a2-45de-92b9-8276f573fdb0/keystone-cron/0.log"
Jan 23 08:32:12 crc kubenswrapper[4937]: I0123 08:32:12.924980 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6ffdfcccc5-xjn5f_41d5735b-6774-456d-b664-15aafa43fac0/keystone-api/0.log"
Jan 23 08:32:12 crc kubenswrapper[4937]: I0123 08:32:12.930340 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29485921-g5gmn_eb1979d8-d92d-41da-9fea-452cec7794fb/keystone-cron/0.log"
Jan 23 08:32:13 crc kubenswrapper[4937]: I0123 08:32:13.090471 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7d117739-79d1-4b7d-9b78-331dd0af4a9e/kube-state-metrics/0.log"
Jan 23 08:32:13 crc kubenswrapper[4937]: I0123 08:32:13.123610 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57b9fd85d8-54qml_6ac78d23-24ea-411c-ba2e-714e3f3fb5d2/horizon-log/0.log"
Jan 23 08:32:13 crc kubenswrapper[4937]: I0123 08:32:13.255163 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-86xtr_070d09b2-6b7b-4d86-976f-aafd5c706f42/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:13 crc kubenswrapper[4937]: I0123 08:32:13.654112 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6845797bf7-lmcfd_828c40cb-f3e5-48ce-ab59-f201b3e46f35/neutron-httpd/0.log"
Jan 23 08:32:13 crc kubenswrapper[4937]: I0123 08:32:13.659086 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6845797bf7-lmcfd_828c40cb-f3e5-48ce-ab59-f201b3e46f35/neutron-api/0.log"
Jan 23 08:32:13 crc kubenswrapper[4937]: I0123 08:32:13.668204 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wn5gn_7a8db5a7-cbaf-4d33-84d8-de028d12baf7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:14 crc kubenswrapper[4937]: I0123 08:32:14.026327 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_4665ea5d-7191-40c4-bd96-5c1b48cf97a2/memcached/0.log"
Jan 23 08:32:14 crc kubenswrapper[4937]: I0123 08:32:14.397399 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5ec145bc-8e86-4bd4-9741-f8f7512c5f3c/nova-cell0-conductor-conductor/0.log"
Jan 23 08:32:14 crc kubenswrapper[4937]: I0123 08:32:14.558415 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_668f3ab2-5d61-4919-afa3-356a1a061499/nova-cell1-conductor-conductor/0.log"
Jan 23 08:32:14 crc kubenswrapper[4937]: I0123 08:32:14.796343 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0e59322d-6b8f-4c12-923e-b008c85e99ef/nova-cell1-novncproxy-novncproxy/0.log"
Jan 23 08:32:14 crc kubenswrapper[4937]: I0123 08:32:14.850553 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-cw5ns_d9e1570d-32bf-4347-a51b-1d88b1cc2ea7/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:14 crc kubenswrapper[4937]: I0123 08:32:14.875681 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bbcf63c3-cd33-4ce7-92cd-78d3001b33dc/nova-api-log/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.080009 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_77e26d95-217b-443b-9c9c-ff2246f5aeb4/nova-metadata-log/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.158607 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bbcf63c3-cd33-4ce7-92cd-78d3001b33dc/nova-api-api/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.448293 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f256fcd3-0094-4316-acac-5cc6424f12d0/mysql-bootstrap/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.507450 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_39fb1734-ce06-4cda-ae43-e5152113b20a/nova-scheduler-scheduler/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.620217 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f256fcd3-0094-4316-acac-5cc6424f12d0/mysql-bootstrap/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.646469 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f256fcd3-0094-4316-acac-5cc6424f12d0/galera/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.772047 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cdd3a96c-6f65-4f39-b435-78f7ceed08b5/mysql-bootstrap/0.log"
Jan 23 08:32:15 crc kubenswrapper[4937]: I0123 08:32:15.963465 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cdd3a96c-6f65-4f39-b435-78f7ceed08b5/mysql-bootstrap/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.016405 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b4d887c1-2f50-42c9-9adf-3f4fe512f399/openstackclient/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.025536 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cdd3a96c-6f65-4f39-b435-78f7ceed08b5/galera/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.195799 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fb8bs_f7c0166f-0553-4d86-bf1f-19bdcfaea146/ovn-controller/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.329948 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-x9lg6_3407cde4-142f-499c-95e0-22eb2c91ea92/openstack-network-exporter/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.558736 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-94t5t_0a871e3b-e711-4a88-9a1a-e9948d1ba9b9/ovsdb-server-init/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.748981 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-94t5t_0a871e3b-e711-4a88-9a1a-e9948d1ba9b9/ovsdb-server-init/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.916283 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-94t5t_0a871e3b-e711-4a88-9a1a-e9948d1ba9b9/ovsdb-server/0.log"
Jan 23 08:32:16 crc kubenswrapper[4937]: I0123 08:32:16.968729 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qjfw7_11934762-47de-4ed2-8554-a88cf1f34532/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.036197 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-94t5t_0a871e3b-e711-4a88-9a1a-e9948d1ba9b9/ovs-vswitchd/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.124517 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8abcdfaa-b5e9-416c-8c8c-91f424ee3c71/openstack-network-exporter/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.293780 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8abcdfaa-b5e9-416c-8c8c-91f424ee3c71/ovn-northd/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.349663 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1d5e690b-35ba-4305-b976-ede5fab8e117/ovsdbserver-nb/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.361769 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_77e26d95-217b-443b-9c9c-ff2246f5aeb4/nova-metadata-metadata/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.384801 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1d5e690b-35ba-4305-b976-ede5fab8e117/openstack-network-exporter/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.598697 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4fa3b925-02fe-4fbf-a441-98aaf94ed191/openstack-network-exporter/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.651802 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4fa3b925-02fe-4fbf-a441-98aaf94ed191/ovsdbserver-sb/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.696841 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74dfd7457b-nnk7x_dcf766f7-3478-4448-a61b-8eff850eae70/placement-api/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.861650 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7d6faa78-d867-4969-8d8f-c97f2bd9f2de/init-config-reloader/0.log"
Jan 23 08:32:17 crc kubenswrapper[4937]: I0123 08:32:17.903469 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74dfd7457b-nnk7x_dcf766f7-3478-4448-a61b-8eff850eae70/placement-log/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.050453 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7d6faa78-d867-4969-8d8f-c97f2bd9f2de/prometheus/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.069606 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7d6faa78-d867-4969-8d8f-c97f2bd9f2de/init-config-reloader/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.097775 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7d6faa78-d867-4969-8d8f-c97f2bd9f2de/config-reloader/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.132133 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_7d6faa78-d867-4969-8d8f-c97f2bd9f2de/thanos-sidecar/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.225989 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_17f47dee-5fcc-4198-ae4c-85b851ae5b20/setup-container/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.389193 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_17f47dee-5fcc-4198-ae4c-85b851ae5b20/rabbitmq/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.439534 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_17f47dee-5fcc-4198-ae4c-85b851ae5b20/setup-container/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.488485 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e/setup-container/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.665811 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e/rabbitmq/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.681212 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_4f26efb9-1fb5-49cf-a9b1-077aa91f3e7e/setup-container/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.695871 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9dea64b9-318c-40db-8f2f-bd0e18587ddf/setup-container/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.847463 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9dea64b9-318c-40db-8f2f-bd0e18587ddf/setup-container/0.log"
Jan 23 08:32:18 crc kubenswrapper[4937]: I0123 08:32:18.895931 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9dea64b9-318c-40db-8f2f-bd0e18587ddf/rabbitmq/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.128481 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-kcdr4_c096740b-e5ec-44ee-aba1-16567d10dd18/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.218138 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-mjvdx_f2dda152-b150-489b-a647-01f4a252be0a/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.300297 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-jpblz_62611df4-293b-4aea-8a00-cbcfc2ffdfaf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.401241 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tkjjb_1a60b146-0a1a-4ecb-a49a-cd6af7a60a45/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.537823 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pzgmd_f028edf4-c095-4de0-9999-c7cec222593f/ssh-known-hosts-edpm-deployment/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.663123 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f756c697-sr9wm_f843f7b9-df1a-4df3-b8e7-bf007d785f62/proxy-server/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.745986 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dq8lz_b9972df0-0d7d-4346-a77c-546a458a1677/swift-ring-rebalance/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.804207 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-77f756c697-sr9wm_f843f7b9-df1a-4df3-b8e7-bf007d785f62/proxy-httpd/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.915576 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/account-auditor/0.log"
Jan 23 08:32:19 crc kubenswrapper[4937]: I0123 08:32:19.982023 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/account-replicator/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.005304 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/account-reaper/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.026330 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/account-server/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.089556 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/container-auditor/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.163950 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/container-replicator/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.214026 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/container-server/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.243452 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/object-auditor/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.274829 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/container-updater/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.351048 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/object-expirer/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.408439 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/object-server/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.421906 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/object-replicator/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.461864 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/object-updater/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.508303 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/rsync/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.600491 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e96a6620-5e97-4f3b-95b3-52c8b3161098/swift-recon-cron/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.662567 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vmrjt_0d580b0b-d08e-4ffd-8a8d-e3a023d567ee/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.877335 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-q9cdb_68285833-1fa7-453c-a35e-197efebf176e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Jan 23 08:32:20 crc kubenswrapper[4937]: I0123 08:32:20.903711 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b3373921-706d-4d27-a1c5-b8aaa6179a0f/tempest-tests-tempest-tests-runner/0.log"
Jan 23 08:32:21 crc kubenswrapper[4937]: I0123 08:32:21.488895 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_392cdd41-36d8-4d25-b7af-2b1b3f42e144/watcher-applier/0.log"
Jan 23 08:32:22 crc kubenswrapper[4937]: I0123 08:32:22.047492 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_732dc8eb-7c57-435d-83f0-375b1a792dd7/watcher-api-log/0.log"
Jan 23 08:32:23 crc kubenswrapper[4937]: I0123 08:32:23.283305 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_989f59fa-b14d-4d8d-949f-e4e397afdeba/watcher-decision-engine/0.log"
Jan 23 08:32:25 crc kubenswrapper[4937]: I0123 08:32:25.612922 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_732dc8eb-7c57-435d-83f0-375b1a792dd7/watcher-api/0.log"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.132865 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbmwf"]
Jan 23 08:32:40 crc kubenswrapper[4937]: E0123 08:32:40.133766 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f1e051-27f9-4dfc-8c40-281d5e9ff197" containerName="container-00"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.133778 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f1e051-27f9-4dfc-8c40-281d5e9ff197" containerName="container-00"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.133988 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f1e051-27f9-4dfc-8c40-281d5e9ff197" containerName="container-00"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.135309 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.144708 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbmwf"]
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.329835 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-utilities\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.330225 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-catalog-content\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.330259 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s82s9\" (UniqueName: \"kubernetes.io/projected/d6484157-a328-4ec8-b844-4518e7c7c097-kube-api-access-s82s9\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.431653 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-utilities\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.431721 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-catalog-content\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.431743 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s82s9\" (UniqueName: \"kubernetes.io/projected/d6484157-a328-4ec8-b844-4518e7c7c097-kube-api-access-s82s9\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.432617 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-utilities\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.433061 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-catalog-content\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.451553 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s82s9\" (UniqueName: \"kubernetes.io/projected/d6484157-a328-4ec8-b844-4518e7c7c097-kube-api-access-s82s9\") pod \"redhat-operators-cbmwf\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.478455 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmwf"
Jan 23 08:32:40 crc kubenswrapper[4937]: W0123 08:32:40.922691 4937 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6484157_a328_4ec8_b844_4518e7c7c097.slice/crio-42a6967a18494b3cf998382cb6d695ee8194328d3d35cf697a8b1a8c50b4d2eb WatchSource:0}: Error finding container 42a6967a18494b3cf998382cb6d695ee8194328d3d35cf697a8b1a8c50b4d2eb: Status 404 returned error can't find the container with id 42a6967a18494b3cf998382cb6d695ee8194328d3d35cf697a8b1a8c50b4d2eb
Jan 23 08:32:40 crc kubenswrapper[4937]: I0123 08:32:40.932137 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbmwf"]
Jan 23 08:32:41 crc kubenswrapper[4937]: I0123 08:32:41.737142 4937 generic.go:334] "Generic (PLEG): container finished" podID="d6484157-a328-4ec8-b844-4518e7c7c097" containerID="c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df" exitCode=0
Jan 23 08:32:41 crc kubenswrapper[4937]: I0123 08:32:41.737261 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerDied","Data":"c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df"}
Jan 23 08:32:41 crc kubenswrapper[4937]: I0123 08:32:41.737442 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerStarted","Data":"42a6967a18494b3cf998382cb6d695ee8194328d3d35cf697a8b1a8c50b4d2eb"}
Jan 23 08:32:41 crc kubenswrapper[4937]: I0123 08:32:41.740866 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 08:32:43 crc kubenswrapper[4937]: I0123 08:32:43.757747 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerStarted","Data":"f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e"}
Jan 23 08:32:48 crc kubenswrapper[4937]: I0123 08:32:48.807209 4937 generic.go:334] "Generic (PLEG): container finished" podID="d6484157-a328-4ec8-b844-4518e7c7c097" containerID="f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e" exitCode=0
Jan 23 08:32:48 crc kubenswrapper[4937]: I0123 08:32:48.807418 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerDied","Data":"f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e"}
Jan 23 08:32:49 crc kubenswrapper[4937]: I0123 08:32:49.702897 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-qkqbb_62aa0d56-f0fe-4cc7-a5dd-b15b7471844d/manager/0.log"
Jan 23 08:32:49 crc kubenswrapper[4937]: I0123 08:32:49.807327 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/util/0.log"
Jan 23 08:32:49 crc kubenswrapper[4937]: I0123 08:32:49.969890 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/util/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.023686 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/pull/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.023902 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/pull/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.261557 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/extract/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.263182 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/util/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.272480 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c747c35b880481e7342d7af70fc51c6fc0ce0115d480627a9963eb5fe3zcw2n_ffb38885-b679-49a3-9151-e2d6f4afaa8e/pull/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.516068 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-k77l4_a0de6431-d5d9-46ec-a7bf-b4c3c999ba22/manager/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.525371 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gtfpn_a97495b6-7a9f-454e-8197-af75abec2f3e/manager/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.772832 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-5l6fx_e1351960-51ec-4735-9019-d267f29568d5/manager/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.782841 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-4k4vj_166795a6-99d8-4030-89eb-7bdef35519dc/manager/0.log"
Jan 23 08:32:50 crc kubenswrapper[4937]: I0123
08:32:50.834050 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerStarted","Data":"8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a"} Jan 23 08:32:50 crc kubenswrapper[4937]: I0123 08:32:50.852449 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbmwf" podStartSLOduration=2.287203312 podStartE2EDuration="10.852412544s" podCreationTimestamp="2026-01-23 08:32:40 +0000 UTC" firstStartedPulling="2026-01-23 08:32:41.740555708 +0000 UTC m=+7161.544322361" lastFinishedPulling="2026-01-23 08:32:50.30576495 +0000 UTC m=+7170.109531593" observedRunningTime="2026-01-23 08:32:50.849451064 +0000 UTC m=+7170.653217717" watchObservedRunningTime="2026-01-23 08:32:50.852412544 +0000 UTC m=+7170.656179197" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.028742 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-hfslp_7e434a74-d86f-4d68-867f-bad41bdf53b5/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.313169 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-9kphf_6c765e89-13b6-4588-a1a4-697b5553bdd0/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.345296 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-c5c4f_b6e325d5-7535-41a5-a2a8-3f01fb4b8c0e/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.397526 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-5bt2v_f255c056-65ce-42fc-9eb6-29395dcde9a3/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.508642 4937 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-xrgdc_7d322eb6-3116-4a70-8845-e62977302d86/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.656465 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-2rqc7_77fd44ee-ceab-4595-890e-7310ec8b6cb2/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.783952 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sd8vq_f1cca66c-d0b5-488a-a3d4-9e0b1714c33c/manager/0.log" Jan 23 08:32:51 crc kubenswrapper[4937]: I0123 08:32:51.976416 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-wmhnh_9d773aa4-667d-431e-b63f-0f4c45d22d58/manager/0.log" Jan 23 08:32:52 crc kubenswrapper[4937]: I0123 08:32:52.042051 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-9xwkd_c8fbb575-36e1-452e-8800-9b310540b205/manager/0.log" Jan 23 08:32:52 crc kubenswrapper[4937]: I0123 08:32:52.200683 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8547lg9k_13a2ad28-4ba2-4470-8e0d-ca42de8e6653/manager/0.log" Jan 23 08:32:52 crc kubenswrapper[4937]: I0123 08:32:52.351704 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-58865b47f6-csc5n_b459659a-4585-4a0b-86ca-c8aa91b81445/operator/0.log" Jan 23 08:32:52 crc kubenswrapper[4937]: I0123 08:32:52.581181 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k5qb6_2fb91dbd-bdfe-4e1d-b114-e4be54c52afc/registry-server/0.log" Jan 23 08:32:53 crc kubenswrapper[4937]: I0123 08:32:53.024329 4937 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-4cdbv_7f7029d6-79d1-4698-91ca-bc61d66124ab/manager/0.log" Jan 23 08:32:53 crc kubenswrapper[4937]: I0123 08:32:53.031678 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-pxrfl_5ba7dbd8-68af-4677-bd8e-686c19912769/manager/0.log" Jan 23 08:32:53 crc kubenswrapper[4937]: I0123 08:32:53.376844 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7qvkg_a58eade1-a27e-42ed-8a1b-f43803d53498/operator/0.log" Jan 23 08:32:53 crc kubenswrapper[4937]: I0123 08:32:53.499950 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-qwg2k_c3c7df6a-2e6c-4a57-b7b6-ed070b4eeb3a/manager/0.log" Jan 23 08:32:53 crc kubenswrapper[4937]: I0123 08:32:53.915050 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8599c9cdcc-j97fl_a7311b13-72db-4f12-9617-039ee018dee7/manager/0.log" Jan 23 08:32:53 crc kubenswrapper[4937]: I0123 08:32:53.951307 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-99bb7_75724f78-fc93-46a1-bfb2-037fe76b1edd/manager/0.log" Jan 23 08:32:54 crc kubenswrapper[4937]: I0123 08:32:54.122143 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-9q4dg_17b47330-4556-499e-83dd-7e67a9a73824/manager/0.log" Jan 23 08:32:54 crc kubenswrapper[4937]: I0123 08:32:54.208788 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d9cd495bb-kw22f_23892b9d-9ef2-4c33-aaa0-1c858cd9255d/manager/0.log" Jan 23 08:33:00 crc kubenswrapper[4937]: I0123 08:33:00.482953 
4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbmwf" Jan 23 08:33:00 crc kubenswrapper[4937]: I0123 08:33:00.483309 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbmwf" Jan 23 08:33:00 crc kubenswrapper[4937]: I0123 08:33:00.538125 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbmwf" Jan 23 08:33:00 crc kubenswrapper[4937]: I0123 08:33:00.988903 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbmwf" Jan 23 08:33:01 crc kubenswrapper[4937]: I0123 08:33:01.049096 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbmwf"] Jan 23 08:33:02 crc kubenswrapper[4937]: I0123 08:33:02.951190 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbmwf" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="registry-server" containerID="cri-o://8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a" gracePeriod=2 Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.556790 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmwf" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.625735 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-utilities\") pod \"d6484157-a328-4ec8-b844-4518e7c7c097\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.626032 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-catalog-content\") pod \"d6484157-a328-4ec8-b844-4518e7c7c097\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.626072 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s82s9\" (UniqueName: \"kubernetes.io/projected/d6484157-a328-4ec8-b844-4518e7c7c097-kube-api-access-s82s9\") pod \"d6484157-a328-4ec8-b844-4518e7c7c097\" (UID: \"d6484157-a328-4ec8-b844-4518e7c7c097\") " Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.626469 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-utilities" (OuterVolumeSpecName: "utilities") pod "d6484157-a328-4ec8-b844-4518e7c7c097" (UID: "d6484157-a328-4ec8-b844-4518e7c7c097"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.627064 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.632426 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6484157-a328-4ec8-b844-4518e7c7c097-kube-api-access-s82s9" (OuterVolumeSpecName: "kube-api-access-s82s9") pod "d6484157-a328-4ec8-b844-4518e7c7c097" (UID: "d6484157-a328-4ec8-b844-4518e7c7c097"). InnerVolumeSpecName "kube-api-access-s82s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.729494 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s82s9\" (UniqueName: \"kubernetes.io/projected/d6484157-a328-4ec8-b844-4518e7c7c097-kube-api-access-s82s9\") on node \"crc\" DevicePath \"\"" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.760530 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6484157-a328-4ec8-b844-4518e7c7c097" (UID: "d6484157-a328-4ec8-b844-4518e7c7c097"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.831334 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6484157-a328-4ec8-b844-4518e7c7c097-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.965823 4937 generic.go:334] "Generic (PLEG): container finished" podID="d6484157-a328-4ec8-b844-4518e7c7c097" containerID="8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a" exitCode=0 Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.965870 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerDied","Data":"8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a"} Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.965899 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbmwf" event={"ID":"d6484157-a328-4ec8-b844-4518e7c7c097","Type":"ContainerDied","Data":"42a6967a18494b3cf998382cb6d695ee8194328d3d35cf697a8b1a8c50b4d2eb"} Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.965917 4937 scope.go:117] "RemoveContainer" containerID="8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a" Jan 23 08:33:03 crc kubenswrapper[4937]: I0123 08:33:03.966106 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbmwf" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.001411 4937 scope.go:117] "RemoveContainer" containerID="f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.020602 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbmwf"] Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.031308 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbmwf"] Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.063749 4937 scope.go:117] "RemoveContainer" containerID="c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.089179 4937 scope.go:117] "RemoveContainer" containerID="8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a" Jan 23 08:33:04 crc kubenswrapper[4937]: E0123 08:33:04.089695 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a\": container with ID starting with 8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a not found: ID does not exist" containerID="8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.089733 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a"} err="failed to get container status \"8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a\": rpc error: code = NotFound desc = could not find container \"8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a\": container with ID starting with 8b5152768f89aa9de306596c39935631a3328181a223d7eb093ec2762f0c7e5a not found: ID does 
not exist" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.089764 4937 scope.go:117] "RemoveContainer" containerID="f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e" Jan 23 08:33:04 crc kubenswrapper[4937]: E0123 08:33:04.090033 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e\": container with ID starting with f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e not found: ID does not exist" containerID="f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.090062 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e"} err="failed to get container status \"f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e\": rpc error: code = NotFound desc = could not find container \"f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e\": container with ID starting with f8577b58374fd098e779c43bc14947786622158058ced3cf8c23d2ef07ac208e not found: ID does not exist" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.090080 4937 scope.go:117] "RemoveContainer" containerID="c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df" Jan 23 08:33:04 crc kubenswrapper[4937]: E0123 08:33:04.090959 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df\": container with ID starting with c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df not found: ID does not exist" containerID="c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.090991 4937 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df"} err="failed to get container status \"c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df\": rpc error: code = NotFound desc = could not find container \"c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df\": container with ID starting with c6135e50dbf3a9404d831b10ef53f19e90b52085289d2636302841609df454df not found: ID does not exist" Jan 23 08:33:04 crc kubenswrapper[4937]: I0123 08:33:04.539041 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" path="/var/lib/kubelet/pods/d6484157-a328-4ec8-b844-4518e7c7c097/volumes" Jan 23 08:33:13 crc kubenswrapper[4937]: I0123 08:33:13.058016 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bbtnp_3168738e-e0e3-43d7-bae7-79276263bb8e/control-plane-machine-set-operator/0.log" Jan 23 08:33:13 crc kubenswrapper[4937]: I0123 08:33:13.264934 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wf8j9_d59eeb02-0c89-4608-98c6-78a5b88cdd5c/kube-rbac-proxy/0.log" Jan 23 08:33:13 crc kubenswrapper[4937]: I0123 08:33:13.306728 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-wf8j9_d59eeb02-0c89-4608-98c6-78a5b88cdd5c/machine-api-operator/0.log" Jan 23 08:33:27 crc kubenswrapper[4937]: I0123 08:33:27.390771 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-6frw2_a2720484-f07d-45fe-8acd-54191c11123f/cert-manager-controller/0.log" Jan 23 08:33:27 crc kubenswrapper[4937]: I0123 08:33:27.757831 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-8vpjz_2a7bd38b-fde5-4a38-bae8-c72a44172d4e/cert-manager-cainjector/0.log" Jan 23 08:33:27 crc kubenswrapper[4937]: I0123 08:33:27.804685 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-96ttx_ca58b1fb-629d-412a-9b10-11a58e9a82ab/cert-manager-webhook/0.log" Jan 23 08:33:37 crc kubenswrapper[4937]: I0123 08:33:37.723829 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:33:37 crc kubenswrapper[4937]: I0123 08:33:37.724423 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:33:41 crc kubenswrapper[4937]: I0123 08:33:41.722505 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-j4rhv_3be187b8-4f0a-4298-bfd6-e03586c755ef/nmstate-console-plugin/0.log" Jan 23 08:33:41 crc kubenswrapper[4937]: I0123 08:33:41.987374 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tsh7l_ecd476d2-4fe0-48df-8e5a-97ebe2c5cb78/nmstate-handler/0.log" Jan 23 08:33:42 crc kubenswrapper[4937]: I0123 08:33:42.078546 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-s7f9b_708f45a9-e78f-4bfd-8036-2ee32896def2/kube-rbac-proxy/0.log" Jan 23 08:33:42 crc kubenswrapper[4937]: I0123 08:33:42.198783 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-s7f9b_708f45a9-e78f-4bfd-8036-2ee32896def2/nmstate-metrics/0.log" Jan 23 08:33:42 crc kubenswrapper[4937]: I0123 08:33:42.356066 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-ddc6r_d4cc3ae1-2cf6-4ca0-a32b-ffb846bd2036/nmstate-operator/0.log" Jan 23 08:33:42 crc kubenswrapper[4937]: I0123 08:33:42.382476 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-c6htp_505e84ed-8479-4fee-b1f9-e660209e9f6a/nmstate-webhook/0.log" Jan 23 08:33:58 crc kubenswrapper[4937]: I0123 08:33:58.099821 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cwb6t_bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5/prometheus-operator/0.log" Jan 23 08:33:58 crc kubenswrapper[4937]: I0123 08:33:58.342221 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9_6b2901ce-8ec3-48a2-956f-bb0dfb4a023f/prometheus-operator-admission-webhook/0.log" Jan 23 08:33:58 crc kubenswrapper[4937]: I0123 08:33:58.448913 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4_dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a/prometheus-operator-admission-webhook/0.log" Jan 23 08:33:58 crc kubenswrapper[4937]: I0123 08:33:58.654044 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-htmfn_6c905d28-0ee0-4fb3-8ee4-2268d65d9626/operator/0.log" Jan 23 08:33:58 crc kubenswrapper[4937]: I0123 08:33:58.747205 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-q8swq_22215e13-9220-494c-8402-aeb857926025/perses-operator/0.log" Jan 23 08:34:07 crc kubenswrapper[4937]: I0123 08:34:07.723631 4937 
patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:34:07 crc kubenswrapper[4937]: I0123 08:34:07.724291 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.075446 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-stcfs_f1efc753-824c-42de-9e52-198864fee8e6/kube-rbac-proxy/0.log" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.156772 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-stcfs_f1efc753-824c-42de-9e52-198864fee8e6/controller/0.log" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.288337 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-frr-files/0.log" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.507148 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-metrics/0.log" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.531212 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-reloader/0.log" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.549230 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-frr-files/0.log" Jan 23 08:34:14 
crc kubenswrapper[4937]: I0123 08:34:14.572381 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-reloader/0.log" Jan 23 08:34:14 crc kubenswrapper[4937]: I0123 08:34:14.958510 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-frr-files/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.001976 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-reloader/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.050193 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-metrics/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.095008 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-metrics/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.203987 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-frr-files/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.302984 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-reloader/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.314039 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/cp-metrics/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.370465 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/controller/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.541414 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/frr-metrics/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.562214 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/kube-rbac-proxy/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.604259 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/kube-rbac-proxy-frr/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.781448 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/reloader/0.log" Jan 23 08:34:15 crc kubenswrapper[4937]: I0123 08:34:15.950354 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-p8z6s_07fe67ed-5566-4458-9379-9440d8085315/frr-k8s-webhook-server/0.log" Jan 23 08:34:16 crc kubenswrapper[4937]: I0123 08:34:16.167036 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6879d9f67d-9c95s_681d1d3c-b77f-4662-b1c9-6958e568becb/manager/0.log" Jan 23 08:34:16 crc kubenswrapper[4937]: I0123 08:34:16.364362 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-85d9998d95-rnvn8_e7712630-dcf6-4b65-b525-bb63a735a0aa/webhook-server/0.log" Jan 23 08:34:16 crc kubenswrapper[4937]: I0123 08:34:16.513070 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gs9fg_6bfdb6d3-4624-4365-8566-c81e229271da/kube-rbac-proxy/0.log" Jan 23 08:34:17 crc kubenswrapper[4937]: I0123 08:34:17.867438 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gs9fg_6bfdb6d3-4624-4365-8566-c81e229271da/speaker/0.log" Jan 23 08:34:17 crc kubenswrapper[4937]: I0123 08:34:17.919124 4937 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cpm64_b19c38c9-ee52-4326-8101-2148ef37acfc/frr/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.328389 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/util/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.506418 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/util/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.525415 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/pull/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.579128 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/pull/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.719809 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/pull/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.776798 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/util/0.log" Jan 23 08:34:31 crc kubenswrapper[4937]: I0123 08:34:31.803778 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc4gwm8_d939310b-dd64-4e1d-9b89-715b39b414cb/extract/0.log" Jan 23 08:34:31 crc 
kubenswrapper[4937]: I0123 08:34:31.901264 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/util/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.125005 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/util/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.126724 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/pull/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.132354 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/pull/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.305845 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/util/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.375390 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/extract/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.404202 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j6h6t_8aabb2ce-fb24-40a6-9e87-51d402e08895/pull/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.712962 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/util/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.714821 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/util/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.734329 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/pull/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.734349 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/pull/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.960084 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/util/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.962387 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/pull/0.log" Jan 23 08:34:32 crc kubenswrapper[4937]: I0123 08:34:32.996374 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08wnfwv_9ab594b7-c995-4eec-b45a-9433a0300440/extract/0.log" Jan 23 08:34:33 crc kubenswrapper[4937]: I0123 08:34:33.113113 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/extract-utilities/0.log" Jan 23 08:34:33 crc 
kubenswrapper[4937]: I0123 08:34:33.311130 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/extract-utilities/0.log" Jan 23 08:34:33 crc kubenswrapper[4937]: I0123 08:34:33.359759 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/extract-content/0.log" Jan 23 08:34:33 crc kubenswrapper[4937]: I0123 08:34:33.378473 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/extract-content/0.log" Jan 23 08:34:33 crc kubenswrapper[4937]: I0123 08:34:33.583432 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/extract-content/0.log" Jan 23 08:34:33 crc kubenswrapper[4937]: I0123 08:34:33.605635 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/extract-utilities/0.log" Jan 23 08:34:33 crc kubenswrapper[4937]: I0123 08:34:33.794929 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/extract-utilities/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.134098 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/extract-content/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.139101 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/extract-content/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.141301 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/extract-utilities/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.421054 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/extract-utilities/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.493164 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/extract-content/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.727303 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6m52s_5379755c-affb-443d-ab53-50eaaf5b5324/registry-server/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.756422 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8mqth_936ee148-8015-4156-9a4b-c394c173f197/marketplace-operator/0.log" Jan 23 08:34:34 crc kubenswrapper[4937]: I0123 08:34:34.969751 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/extract-utilities/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.132114 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/extract-utilities/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.250042 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/extract-content/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.276366 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/extract-content/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.484544 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/extract-utilities/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.513310 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/extract-content/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.817735 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/extract-utilities/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.834742 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ckzjh_170a9df3-c6b9-4ec1-abc1-098640c265c8/registry-server/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.879029 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vt9wx_ed183f54-661c-4e72-9a2a-cd277c6119d1/registry-server/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.941417 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/extract-utilities/0.log" Jan 23 08:34:35 crc kubenswrapper[4937]: I0123 08:34:35.990312 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/extract-content/0.log" Jan 23 08:34:36 crc kubenswrapper[4937]: I0123 08:34:36.005260 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/extract-content/0.log" 
Jan 23 08:34:36 crc kubenswrapper[4937]: I0123 08:34:36.182176 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/extract-utilities/0.log" Jan 23 08:34:36 crc kubenswrapper[4937]: I0123 08:34:36.204946 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/extract-content/0.log" Jan 23 08:34:37 crc kubenswrapper[4937]: I0123 08:34:37.232579 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lw92f_3700c64c-0a23-4247-aba5-3a4f6da806d3/registry-server/0.log" Jan 23 08:34:37 crc kubenswrapper[4937]: I0123 08:34:37.724053 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:34:37 crc kubenswrapper[4937]: I0123 08:34:37.724125 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:34:37 crc kubenswrapper[4937]: I0123 08:34:37.724171 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 08:34:37 crc kubenswrapper[4937]: I0123 08:34:37.724963 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5edf7b95091b558c59a7f1a2a7be767f6ee4fa6b350a35d1b9e2c918a3e96d4"} pod="openshift-machine-config-operator/machine-config-daemon-bglvs" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:34:37 crc kubenswrapper[4937]: I0123 08:34:37.725036 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://a5edf7b95091b558c59a7f1a2a7be767f6ee4fa6b350a35d1b9e2c918a3e96d4" gracePeriod=600 Jan 23 08:34:38 crc kubenswrapper[4937]: I0123 08:34:38.852057 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="a5edf7b95091b558c59a7f1a2a7be767f6ee4fa6b350a35d1b9e2c918a3e96d4" exitCode=0 Jan 23 08:34:38 crc kubenswrapper[4937]: I0123 08:34:38.852135 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"a5edf7b95091b558c59a7f1a2a7be767f6ee4fa6b350a35d1b9e2c918a3e96d4"} Jan 23 08:34:38 crc kubenswrapper[4937]: I0123 08:34:38.852652 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerStarted","Data":"ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"} Jan 23 08:34:38 crc kubenswrapper[4937]: I0123 08:34:38.852682 4937 scope.go:117] "RemoveContainer" containerID="3b8d5ddd9863ab162cbd80391d46c26c15bcfb71fed524256a0072abb125647b" Jan 23 08:34:49 crc kubenswrapper[4937]: I0123 08:34:49.035926 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cwb6t_bb87646f-fb1a-4ec8-9d5f-6aeb9fdbb8c5/prometheus-operator/0.log" Jan 23 08:34:49 crc kubenswrapper[4937]: I0123 08:34:49.080668 4937 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfbdb56f-5zhf9_6b2901ce-8ec3-48a2-956f-bb0dfb4a023f/prometheus-operator-admission-webhook/0.log" Jan 23 08:34:49 crc kubenswrapper[4937]: I0123 08:34:49.118458 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfbdb56f-mtrb4_dcf60bc9-8a7e-493a-a3ee-c7e83e84c59a/prometheus-operator-admission-webhook/0.log" Jan 23 08:34:49 crc kubenswrapper[4937]: I0123 08:34:49.285162 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-q8swq_22215e13-9220-494c-8402-aeb857926025/perses-operator/0.log" Jan 23 08:34:49 crc kubenswrapper[4937]: I0123 08:34:49.324272 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-htmfn_6c905d28-0ee0-4fb3-8ee4-2268d65d9626/operator/0.log" Jan 23 08:35:12 crc kubenswrapper[4937]: E0123 08:35:12.897070 4937 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.64:39442->38.102.83.64:36421: write tcp 38.102.83.64:39442->38.102.83.64:36421: write: broken pipe Jan 23 08:36:55 crc kubenswrapper[4937]: I0123 08:36:55.695427 4937 generic.go:334] "Generic (PLEG): container finished" podID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerID="4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb" exitCode=0 Jan 23 08:36:55 crc kubenswrapper[4937]: I0123 08:36:55.695487 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g48rl/must-gather-t9dpk" event={"ID":"ab264037-efa1-4d05-9bc9-028d34f90a92","Type":"ContainerDied","Data":"4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb"} Jan 23 08:36:55 crc kubenswrapper[4937]: I0123 08:36:55.696438 4937 scope.go:117] "RemoveContainer" containerID="4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb" Jan 23 08:36:56 crc kubenswrapper[4937]: 
I0123 08:36:56.046900 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g48rl_must-gather-t9dpk_ab264037-efa1-4d05-9bc9-028d34f90a92/gather/0.log" Jan 23 08:37:04 crc kubenswrapper[4937]: I0123 08:37:04.877458 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g48rl/must-gather-t9dpk"] Jan 23 08:37:04 crc kubenswrapper[4937]: I0123 08:37:04.878192 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-g48rl/must-gather-t9dpk" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="copy" containerID="cri-o://9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f" gracePeriod=2 Jan 23 08:37:04 crc kubenswrapper[4937]: I0123 08:37:04.891245 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g48rl/must-gather-t9dpk"] Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.334033 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g48rl_must-gather-t9dpk_ab264037-efa1-4d05-9bc9-028d34f90a92/copy/0.log" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.335166 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.520237 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76zdb\" (UniqueName: \"kubernetes.io/projected/ab264037-efa1-4d05-9bc9-028d34f90a92-kube-api-access-76zdb\") pod \"ab264037-efa1-4d05-9bc9-028d34f90a92\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.520302 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab264037-efa1-4d05-9bc9-028d34f90a92-must-gather-output\") pod \"ab264037-efa1-4d05-9bc9-028d34f90a92\" (UID: \"ab264037-efa1-4d05-9bc9-028d34f90a92\") " Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.529815 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab264037-efa1-4d05-9bc9-028d34f90a92-kube-api-access-76zdb" (OuterVolumeSpecName: "kube-api-access-76zdb") pod "ab264037-efa1-4d05-9bc9-028d34f90a92" (UID: "ab264037-efa1-4d05-9bc9-028d34f90a92"). InnerVolumeSpecName "kube-api-access-76zdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.623362 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76zdb\" (UniqueName: \"kubernetes.io/projected/ab264037-efa1-4d05-9bc9-028d34f90a92-kube-api-access-76zdb\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.734961 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab264037-efa1-4d05-9bc9-028d34f90a92-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ab264037-efa1-4d05-9bc9-028d34f90a92" (UID: "ab264037-efa1-4d05-9bc9-028d34f90a92"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.790255 4937 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g48rl_must-gather-t9dpk_ab264037-efa1-4d05-9bc9-028d34f90a92/copy/0.log" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.790994 4937 generic.go:334] "Generic (PLEG): container finished" podID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerID="9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f" exitCode=143 Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.791064 4937 scope.go:117] "RemoveContainer" containerID="9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.791102 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g48rl/must-gather-t9dpk" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.828245 4937 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ab264037-efa1-4d05-9bc9-028d34f90a92-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.828780 4937 scope.go:117] "RemoveContainer" containerID="4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.918867 4937 scope.go:117] "RemoveContainer" containerID="9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f" Jan 23 08:37:05 crc kubenswrapper[4937]: E0123 08:37:05.919362 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f\": container with ID starting with 9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f not found: ID does not exist" 
containerID="9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.919395 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f"} err="failed to get container status \"9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f\": rpc error: code = NotFound desc = could not find container \"9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f\": container with ID starting with 9d35fa85e84fe3a93361ce86d19b6450cd3f92bc806ef7ce424e138617c5926f not found: ID does not exist" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.919421 4937 scope.go:117] "RemoveContainer" containerID="4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb" Jan 23 08:37:05 crc kubenswrapper[4937]: E0123 08:37:05.919738 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb\": container with ID starting with 4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb not found: ID does not exist" containerID="4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb" Jan 23 08:37:05 crc kubenswrapper[4937]: I0123 08:37:05.919813 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb"} err="failed to get container status \"4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb\": rpc error: code = NotFound desc = could not find container \"4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb\": container with ID starting with 4c367ad23f817bd64dada5bb45ea39322615e1ec976884177ef427dfcd040ebb not found: ID does not exist" Jan 23 08:37:06 crc kubenswrapper[4937]: I0123 08:37:06.541889 4937 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" path="/var/lib/kubelet/pods/ab264037-efa1-4d05-9bc9-028d34f90a92/volumes" Jan 23 08:37:07 crc kubenswrapper[4937]: I0123 08:37:07.724406 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:37:07 crc kubenswrapper[4937]: I0123 08:37:07.725821 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:37:37 crc kubenswrapper[4937]: I0123 08:37:37.724378 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:37:37 crc kubenswrapper[4937]: I0123 08:37:37.724972 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.660993 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qsc55"] Jan 23 08:38:03 crc kubenswrapper[4937]: E0123 08:38:03.662050 4937 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="extract-content" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662069 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="extract-content" Jan 23 08:38:03 crc kubenswrapper[4937]: E0123 08:38:03.662091 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="gather" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662116 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="gather" Jan 23 08:38:03 crc kubenswrapper[4937]: E0123 08:38:03.662128 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="copy" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662159 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="copy" Jan 23 08:38:03 crc kubenswrapper[4937]: E0123 08:38:03.662176 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="extract-utilities" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662184 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="extract-utilities" Jan 23 08:38:03 crc kubenswrapper[4937]: E0123 08:38:03.662203 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="registry-server" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662210 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" containerName="registry-server" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662449 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6484157-a328-4ec8-b844-4518e7c7c097" 
containerName="registry-server" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662469 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="gather" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.662489 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab264037-efa1-4d05-9bc9-028d34f90a92" containerName="copy" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.664490 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.675974 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsc55"] Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.780958 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-utilities\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.781020 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-catalog-content\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.781314 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4spp7\" (UniqueName: \"kubernetes.io/projected/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-kube-api-access-4spp7\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " 
pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.884106 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4spp7\" (UniqueName: \"kubernetes.io/projected/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-kube-api-access-4spp7\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.884253 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-utilities\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.884314 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-catalog-content\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.884883 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-catalog-content\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.885164 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-utilities\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" 
Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.915392 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4spp7\" (UniqueName: \"kubernetes.io/projected/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-kube-api-access-4spp7\") pod \"redhat-marketplace-qsc55\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:03 crc kubenswrapper[4937]: I0123 08:38:03.995999 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:04 crc kubenswrapper[4937]: I0123 08:38:04.548351 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsc55"] Jan 23 08:38:05 crc kubenswrapper[4937]: I0123 08:38:05.369853 4937 generic.go:334] "Generic (PLEG): container finished" podID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerID="27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec" exitCode=0 Jan 23 08:38:05 crc kubenswrapper[4937]: I0123 08:38:05.369923 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsc55" event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerDied","Data":"27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec"} Jan 23 08:38:05 crc kubenswrapper[4937]: I0123 08:38:05.370801 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsc55" event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerStarted","Data":"9bfe5766488c799edc9bd868ed03713ecdfb6dd5adb4e1206fb5d7c87f4235c4"} Jan 23 08:38:05 crc kubenswrapper[4937]: I0123 08:38:05.372716 4937 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:38:06 crc kubenswrapper[4937]: I0123 08:38:06.383126 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsc55" 
event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerStarted","Data":"476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010"} Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.392706 4937 generic.go:334] "Generic (PLEG): container finished" podID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerID="476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010" exitCode=0 Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.392752 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsc55" event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerDied","Data":"476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010"} Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.723709 4937 patch_prober.go:28] interesting pod/machine-config-daemon-bglvs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.724049 4937 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.724097 4937 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.724974 4937 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"} 
pod="openshift-machine-config-operator/machine-config-daemon-bglvs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:38:07 crc kubenswrapper[4937]: I0123 08:38:07.725053 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerName="machine-config-daemon" containerID="cri-o://ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" gracePeriod=600 Jan 23 08:38:07 crc kubenswrapper[4937]: E0123 08:38:07.857330 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:38:08 crc kubenswrapper[4937]: I0123 08:38:08.403110 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsc55" event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerStarted","Data":"af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b"} Jan 23 08:38:08 crc kubenswrapper[4937]: I0123 08:38:08.406031 4937 generic.go:334] "Generic (PLEG): container finished" podID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" exitCode=0 Jan 23 08:38:08 crc kubenswrapper[4937]: I0123 08:38:08.406111 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" event={"ID":"2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9","Type":"ContainerDied","Data":"ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"} Jan 23 08:38:08 crc 
kubenswrapper[4937]: I0123 08:38:08.406170 4937 scope.go:117] "RemoveContainer" containerID="a5edf7b95091b558c59a7f1a2a7be767f6ee4fa6b350a35d1b9e2c918a3e96d4" Jan 23 08:38:08 crc kubenswrapper[4937]: I0123 08:38:08.407096 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:38:08 crc kubenswrapper[4937]: E0123 08:38:08.407406 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:38:08 crc kubenswrapper[4937]: I0123 08:38:08.428553 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qsc55" podStartSLOduration=2.768258682 podStartE2EDuration="5.428534619s" podCreationTimestamp="2026-01-23 08:38:03 +0000 UTC" firstStartedPulling="2026-01-23 08:38:05.372328916 +0000 UTC m=+7485.176095559" lastFinishedPulling="2026-01-23 08:38:08.032604843 +0000 UTC m=+7487.836371496" observedRunningTime="2026-01-23 08:38:08.422492654 +0000 UTC m=+7488.226259307" watchObservedRunningTime="2026-01-23 08:38:08.428534619 +0000 UTC m=+7488.232301272" Jan 23 08:38:10 crc kubenswrapper[4937]: I0123 08:38:10.838750 4937 scope.go:117] "RemoveContainer" containerID="27f795b90e332bbbb886db307f5b9a1ece8b901f45b806a6ba9e385bddf9766b" Jan 23 08:38:13 crc kubenswrapper[4937]: I0123 08:38:13.996219 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:13 crc kubenswrapper[4937]: I0123 08:38:13.996531 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:14 crc kubenswrapper[4937]: I0123 08:38:14.055659 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:14 crc kubenswrapper[4937]: I0123 08:38:14.525257 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:14 crc kubenswrapper[4937]: I0123 08:38:14.575832 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsc55"] Jan 23 08:38:16 crc kubenswrapper[4937]: I0123 08:38:16.495205 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qsc55" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="registry-server" containerID="cri-o://af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b" gracePeriod=2 Jan 23 08:38:16 crc kubenswrapper[4937]: I0123 08:38:16.972540 4937 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.070514 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-catalog-content\") pod \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.070689 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-utilities\") pod \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.070799 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4spp7\" (UniqueName: \"kubernetes.io/projected/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-kube-api-access-4spp7\") pod \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\" (UID: \"e9431e58-a6bd-4a80-9866-fccf83bdeaf5\") " Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.072837 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-utilities" (OuterVolumeSpecName: "utilities") pod "e9431e58-a6bd-4a80-9866-fccf83bdeaf5" (UID: "e9431e58-a6bd-4a80-9866-fccf83bdeaf5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.080801 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-kube-api-access-4spp7" (OuterVolumeSpecName: "kube-api-access-4spp7") pod "e9431e58-a6bd-4a80-9866-fccf83bdeaf5" (UID: "e9431e58-a6bd-4a80-9866-fccf83bdeaf5"). InnerVolumeSpecName "kube-api-access-4spp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.093968 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9431e58-a6bd-4a80-9866-fccf83bdeaf5" (UID: "e9431e58-a6bd-4a80-9866-fccf83bdeaf5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.173480 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.173515 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.173525 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4spp7\" (UniqueName: \"kubernetes.io/projected/e9431e58-a6bd-4a80-9866-fccf83bdeaf5-kube-api-access-4spp7\") on node \"crc\" DevicePath \"\"" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.505369 4937 generic.go:334] "Generic (PLEG): container finished" podID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerID="af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b" exitCode=0 Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.505413 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qsc55" event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerDied","Data":"af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b"} Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.505448 4937 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-qsc55" event={"ID":"e9431e58-a6bd-4a80-9866-fccf83bdeaf5","Type":"ContainerDied","Data":"9bfe5766488c799edc9bd868ed03713ecdfb6dd5adb4e1206fb5d7c87f4235c4"} Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.505445 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qsc55" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.505536 4937 scope.go:117] "RemoveContainer" containerID="af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.528801 4937 scope.go:117] "RemoveContainer" containerID="476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.548709 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsc55"] Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.558565 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qsc55"] Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.563729 4937 scope.go:117] "RemoveContainer" containerID="27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.615158 4937 scope.go:117] "RemoveContainer" containerID="af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b" Jan 23 08:38:17 crc kubenswrapper[4937]: E0123 08:38:17.615571 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b\": container with ID starting with af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b not found: ID does not exist" containerID="af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.615616 4937 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b"} err="failed to get container status \"af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b\": rpc error: code = NotFound desc = could not find container \"af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b\": container with ID starting with af7acab5f98c9f35c7c4833105f4c8b92649d953fd4a8bb79a6077197c99ba9b not found: ID does not exist" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.615646 4937 scope.go:117] "RemoveContainer" containerID="476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010" Jan 23 08:38:17 crc kubenswrapper[4937]: E0123 08:38:17.615855 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010\": container with ID starting with 476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010 not found: ID does not exist" containerID="476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.615879 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010"} err="failed to get container status \"476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010\": rpc error: code = NotFound desc = could not find container \"476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010\": container with ID starting with 476e335647fceeebf4067bde33c8898798ddd8234deb9d4970b7a71428faa010 not found: ID does not exist" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.615892 4937 scope.go:117] "RemoveContainer" containerID="27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec" Jan 23 08:38:17 crc kubenswrapper[4937]: E0123 
08:38:17.616102 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec\": container with ID starting with 27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec not found: ID does not exist" containerID="27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec" Jan 23 08:38:17 crc kubenswrapper[4937]: I0123 08:38:17.616151 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec"} err="failed to get container status \"27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec\": rpc error: code = NotFound desc = could not find container \"27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec\": container with ID starting with 27f8089ecc40d88c850c37bad96bba1285ca689f3410bb3882d8c239194d29ec not found: ID does not exist" Jan 23 08:38:18 crc kubenswrapper[4937]: I0123 08:38:18.538792 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" path="/var/lib/kubelet/pods/e9431e58-a6bd-4a80-9866-fccf83bdeaf5/volumes" Jan 23 08:38:20 crc kubenswrapper[4937]: I0123 08:38:20.537513 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:38:20 crc kubenswrapper[4937]: E0123 08:38:20.538226 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:38:32 crc kubenswrapper[4937]: I0123 08:38:32.527364 
4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:38:32 crc kubenswrapper[4937]: E0123 08:38:32.528328 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:38:43 crc kubenswrapper[4937]: I0123 08:38:43.526661 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:38:43 crc kubenswrapper[4937]: E0123 08:38:43.527480 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:38:54 crc kubenswrapper[4937]: I0123 08:38:54.527404 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:38:54 crc kubenswrapper[4937]: E0123 08:38:54.528386 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:39:08 crc kubenswrapper[4937]: I0123 
08:39:08.526789 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:39:08 crc kubenswrapper[4937]: E0123 08:39:08.527584 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:39:20 crc kubenswrapper[4937]: I0123 08:39:20.536024 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:39:20 crc kubenswrapper[4937]: E0123 08:39:20.537958 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:39:31 crc kubenswrapper[4937]: I0123 08:39:31.526806 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:39:31 crc kubenswrapper[4937]: E0123 08:39:31.527631 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:39:45 crc 
kubenswrapper[4937]: I0123 08:39:45.525950 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:39:45 crc kubenswrapper[4937]: E0123 08:39:45.526761 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:39:57 crc kubenswrapper[4937]: I0123 08:39:57.526879 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:39:57 crc kubenswrapper[4937]: E0123 08:39:57.528901 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:40:08 crc kubenswrapper[4937]: I0123 08:40:08.526016 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:40:08 crc kubenswrapper[4937]: E0123 08:40:08.527127 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 
23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.150163 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fswtw"] Jan 23 08:40:19 crc kubenswrapper[4937]: E0123 08:40:19.151108 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="extract-utilities" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.151120 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="extract-utilities" Jan 23 08:40:19 crc kubenswrapper[4937]: E0123 08:40:19.151141 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="extract-content" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.151147 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="extract-content" Jan 23 08:40:19 crc kubenswrapper[4937]: E0123 08:40:19.151162 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="registry-server" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.151169 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="registry-server" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.151363 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9431e58-a6bd-4a80-9866-fccf83bdeaf5" containerName="registry-server" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.152798 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.178783 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fswtw"] Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.237045 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knfhx\" (UniqueName: \"kubernetes.io/projected/6a73d206-aed0-452a-9df3-08e7b5808489-kube-api-access-knfhx\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.237262 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-catalog-content\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.237308 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-utilities\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.339902 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knfhx\" (UniqueName: \"kubernetes.io/projected/6a73d206-aed0-452a-9df3-08e7b5808489-kube-api-access-knfhx\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.340009 4937 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-catalog-content\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.340041 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-utilities\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.340788 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-catalog-content\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.340898 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-utilities\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.359490 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knfhx\" (UniqueName: \"kubernetes.io/projected/6a73d206-aed0-452a-9df3-08e7b5808489-kube-api-access-knfhx\") pod \"community-operators-fswtw\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") " pod="openshift-marketplace/community-operators-fswtw" Jan 23 08:40:19 crc kubenswrapper[4937]: I0123 08:40:19.531639 4937 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:20 crc kubenswrapper[4937]: I0123 08:40:20.073890 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fswtw"]
Jan 23 08:40:20 crc kubenswrapper[4937]: I0123 08:40:20.741873 4937 generic.go:334] "Generic (PLEG): container finished" podID="6a73d206-aed0-452a-9df3-08e7b5808489" containerID="a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b" exitCode=0
Jan 23 08:40:20 crc kubenswrapper[4937]: I0123 08:40:20.741923 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fswtw" event={"ID":"6a73d206-aed0-452a-9df3-08e7b5808489","Type":"ContainerDied","Data":"a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b"}
Jan 23 08:40:20 crc kubenswrapper[4937]: I0123 08:40:20.742518 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fswtw" event={"ID":"6a73d206-aed0-452a-9df3-08e7b5808489","Type":"ContainerStarted","Data":"dc72b377d8b82a9155374d7b44a7cdc5dea461fe98e628d3432a3b6f2958efad"}
Jan 23 08:40:22 crc kubenswrapper[4937]: I0123 08:40:22.759705 4937 generic.go:334] "Generic (PLEG): container finished" podID="6a73d206-aed0-452a-9df3-08e7b5808489" containerID="2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c" exitCode=0
Jan 23 08:40:22 crc kubenswrapper[4937]: I0123 08:40:22.760462 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fswtw" event={"ID":"6a73d206-aed0-452a-9df3-08e7b5808489","Type":"ContainerDied","Data":"2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c"}
Jan 23 08:40:23 crc kubenswrapper[4937]: I0123 08:40:23.526709 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:40:23 crc kubenswrapper[4937]: E0123 08:40:23.527223 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:40:23 crc kubenswrapper[4937]: I0123 08:40:23.782191 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fswtw" event={"ID":"6a73d206-aed0-452a-9df3-08e7b5808489","Type":"ContainerStarted","Data":"bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371"}
Jan 23 08:40:23 crc kubenswrapper[4937]: I0123 08:40:23.806223 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fswtw" podStartSLOduration=2.317508199 podStartE2EDuration="4.80620153s" podCreationTimestamp="2026-01-23 08:40:19 +0000 UTC" firstStartedPulling="2026-01-23 08:40:20.744129139 +0000 UTC m=+7620.547895792" lastFinishedPulling="2026-01-23 08:40:23.23282247 +0000 UTC m=+7623.036589123" observedRunningTime="2026-01-23 08:40:23.799313013 +0000 UTC m=+7623.603079666" watchObservedRunningTime="2026-01-23 08:40:23.80620153 +0000 UTC m=+7623.609968183"
Jan 23 08:40:29 crc kubenswrapper[4937]: I0123 08:40:29.532095 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:29 crc kubenswrapper[4937]: I0123 08:40:29.534435 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:29 crc kubenswrapper[4937]: I0123 08:40:29.620104 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:29 crc kubenswrapper[4937]: I0123 08:40:29.900626 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.100443 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fswtw"]
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.102161 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fswtw" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="registry-server" containerID="cri-o://bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371" gracePeriod=2
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.606840 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.737756 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-utilities\") pod \"6a73d206-aed0-452a-9df3-08e7b5808489\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") "
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.738255 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knfhx\" (UniqueName: \"kubernetes.io/projected/6a73d206-aed0-452a-9df3-08e7b5808489-kube-api-access-knfhx\") pod \"6a73d206-aed0-452a-9df3-08e7b5808489\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") "
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.738380 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-catalog-content\") pod \"6a73d206-aed0-452a-9df3-08e7b5808489\" (UID: \"6a73d206-aed0-452a-9df3-08e7b5808489\") "
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.738541 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-utilities" (OuterVolumeSpecName: "utilities") pod "6a73d206-aed0-452a-9df3-08e7b5808489" (UID: "6a73d206-aed0-452a-9df3-08e7b5808489"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.739118 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.744432 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a73d206-aed0-452a-9df3-08e7b5808489-kube-api-access-knfhx" (OuterVolumeSpecName: "kube-api-access-knfhx") pod "6a73d206-aed0-452a-9df3-08e7b5808489" (UID: "6a73d206-aed0-452a-9df3-08e7b5808489"). InnerVolumeSpecName "kube-api-access-knfhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.798135 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a73d206-aed0-452a-9df3-08e7b5808489" (UID: "6a73d206-aed0-452a-9df3-08e7b5808489"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.841267 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knfhx\" (UniqueName: \"kubernetes.io/projected/6a73d206-aed0-452a-9df3-08e7b5808489-kube-api-access-knfhx\") on node \"crc\" DevicePath \"\""
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.841631 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a73d206-aed0-452a-9df3-08e7b5808489-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.885231 4937 generic.go:334] "Generic (PLEG): container finished" podID="6a73d206-aed0-452a-9df3-08e7b5808489" containerID="bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371" exitCode=0
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.885283 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fswtw" event={"ID":"6a73d206-aed0-452a-9df3-08e7b5808489","Type":"ContainerDied","Data":"bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371"}
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.885312 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fswtw" event={"ID":"6a73d206-aed0-452a-9df3-08e7b5808489","Type":"ContainerDied","Data":"dc72b377d8b82a9155374d7b44a7cdc5dea461fe98e628d3432a3b6f2958efad"}
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.885332 4937 scope.go:117] "RemoveContainer" containerID="bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371"
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.885490 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fswtw"
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.915139 4937 scope.go:117] "RemoveContainer" containerID="2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c"
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.947730 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fswtw"]
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.954985 4937 scope.go:117] "RemoveContainer" containerID="a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b"
Jan 23 08:40:33 crc kubenswrapper[4937]: I0123 08:40:33.970818 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fswtw"]
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.016718 4937 scope.go:117] "RemoveContainer" containerID="bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371"
Jan 23 08:40:34 crc kubenswrapper[4937]: E0123 08:40:34.022057 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371\": container with ID starting with bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371 not found: ID does not exist" containerID="bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371"
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.022117 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371"} err="failed to get container status \"bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371\": rpc error: code = NotFound desc = could not find container \"bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371\": container with ID starting with bf8a91d1652cb5b9fed814e3acb615db8afda12843d9916e8966ca0eb7eeb371 not found: ID does not exist"
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.022149 4937 scope.go:117] "RemoveContainer" containerID="2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c"
Jan 23 08:40:34 crc kubenswrapper[4937]: E0123 08:40:34.028043 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c\": container with ID starting with 2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c not found: ID does not exist" containerID="2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c"
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.028106 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c"} err="failed to get container status \"2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c\": rpc error: code = NotFound desc = could not find container \"2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c\": container with ID starting with 2ec2ac3a3b6b4b930a19edb164a4a3ffa9713f7c6219b5efad79b32d8c27750c not found: ID does not exist"
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.028139 4937 scope.go:117] "RemoveContainer" containerID="a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b"
Jan 23 08:40:34 crc kubenswrapper[4937]: E0123 08:40:34.031936 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b\": container with ID starting with a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b not found: ID does not exist" containerID="a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b"
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.031992 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b"} err="failed to get container status \"a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b\": rpc error: code = NotFound desc = could not find container \"a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b\": container with ID starting with a6d1163ba44551db3f27f64bf6b7862d7bfd8ffd04fd69baac78c33baac74d0b not found: ID does not exist"
Jan 23 08:40:34 crc kubenswrapper[4937]: I0123 08:40:34.535878 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" path="/var/lib/kubelet/pods/6a73d206-aed0-452a-9df3-08e7b5808489/volumes"
Jan 23 08:40:38 crc kubenswrapper[4937]: I0123 08:40:38.526407 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:40:38 crc kubenswrapper[4937]: E0123 08:40:38.527354 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:40:50 crc kubenswrapper[4937]: I0123 08:40:50.536037 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:40:50 crc kubenswrapper[4937]: E0123 08:40:50.537131 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.527875 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:41:05 crc kubenswrapper[4937]: E0123 08:41:05.529205 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.831167 4937 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sttl9"]
Jan 23 08:41:05 crc kubenswrapper[4937]: E0123 08:41:05.831946 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="extract-content"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.831973 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="extract-content"
Jan 23 08:41:05 crc kubenswrapper[4937]: E0123 08:41:05.832012 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="registry-server"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.832021 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="registry-server"
Jan 23 08:41:05 crc kubenswrapper[4937]: E0123 08:41:05.832057 4937 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="extract-utilities"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.832067 4937 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="extract-utilities"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.832354 4937 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a73d206-aed0-452a-9df3-08e7b5808489" containerName="registry-server"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.834704 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.852308 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sttl9"]
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.871138 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-utilities\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.871454 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-catalog-content\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.871688 4937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsc2p\" (UniqueName: \"kubernetes.io/projected/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-kube-api-access-nsc2p\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.972850 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-utilities\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.972971 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-catalog-content\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.973049 4937 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsc2p\" (UniqueName: \"kubernetes.io/projected/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-kube-api-access-nsc2p\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.973440 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-utilities\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:05 crc kubenswrapper[4937]: I0123 08:41:05.973509 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-catalog-content\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:06 crc kubenswrapper[4937]: I0123 08:41:06.008849 4937 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsc2p\" (UniqueName: \"kubernetes.io/projected/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-kube-api-access-nsc2p\") pod \"certified-operators-sttl9\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") " pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:06 crc kubenswrapper[4937]: I0123 08:41:06.165856 4937 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:06 crc kubenswrapper[4937]: I0123 08:41:06.855013 4937 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sttl9"]
Jan 23 08:41:07 crc kubenswrapper[4937]: I0123 08:41:07.192846 4937 generic.go:334] "Generic (PLEG): container finished" podID="b07e3d97-2142-41c4-8c21-089ce1ac4dfe" containerID="41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280" exitCode=0
Jan 23 08:41:07 crc kubenswrapper[4937]: I0123 08:41:07.192947 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerDied","Data":"41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280"}
Jan 23 08:41:07 crc kubenswrapper[4937]: I0123 08:41:07.193145 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerStarted","Data":"5b8543774650745186a5643bd4d1ada0346e9d72f50de42586ef5430f67ca009"}
Jan 23 08:41:09 crc kubenswrapper[4937]: I0123 08:41:09.213004 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerStarted","Data":"ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71"}
Jan 23 08:41:09 crc kubenswrapper[4937]: E0123 08:41:09.394945 4937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb07e3d97_2142_41c4_8c21_089ce1ac4dfe.slice/crio-ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb07e3d97_2142_41c4_8c21_089ce1ac4dfe.slice/crio-conmon-ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 08:41:10 crc kubenswrapper[4937]: I0123 08:41:10.231259 4937 generic.go:334] "Generic (PLEG): container finished" podID="b07e3d97-2142-41c4-8c21-089ce1ac4dfe" containerID="ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71" exitCode=0
Jan 23 08:41:10 crc kubenswrapper[4937]: I0123 08:41:10.231331 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerDied","Data":"ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71"}
Jan 23 08:41:11 crc kubenswrapper[4937]: I0123 08:41:11.242755 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerStarted","Data":"d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd"}
Jan 23 08:41:11 crc kubenswrapper[4937]: I0123 08:41:11.262447 4937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sttl9" podStartSLOduration=2.722138651 podStartE2EDuration="6.2624312s" podCreationTimestamp="2026-01-23 08:41:05 +0000 UTC" firstStartedPulling="2026-01-23 08:41:07.194938613 +0000 UTC m=+7666.998705266" lastFinishedPulling="2026-01-23 08:41:10.735231162 +0000 UTC m=+7670.538997815" observedRunningTime="2026-01-23 08:41:11.258408981 +0000 UTC m=+7671.062175644" watchObservedRunningTime="2026-01-23 08:41:11.2624312 +0000 UTC m=+7671.066197853"
Jan 23 08:41:16 crc kubenswrapper[4937]: I0123 08:41:16.166197 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:16 crc kubenswrapper[4937]: I0123 08:41:16.166740 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:16 crc kubenswrapper[4937]: I0123 08:41:16.211828 4937 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:16 crc kubenswrapper[4937]: I0123 08:41:16.354943 4937 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:17 crc kubenswrapper[4937]: I0123 08:41:17.409174 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sttl9"]
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.326701 4937 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sttl9" podUID="b07e3d97-2142-41c4-8c21-089ce1ac4dfe" containerName="registry-server" containerID="cri-o://d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd" gracePeriod=2
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.837463 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.966359 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-catalog-content\") pod \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") "
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.966638 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsc2p\" (UniqueName: \"kubernetes.io/projected/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-kube-api-access-nsc2p\") pod \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") "
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.966717 4937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-utilities\") pod \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\" (UID: \"b07e3d97-2142-41c4-8c21-089ce1ac4dfe\") "
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.967847 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-utilities" (OuterVolumeSpecName: "utilities") pod "b07e3d97-2142-41c4-8c21-089ce1ac4dfe" (UID: "b07e3d97-2142-41c4-8c21-089ce1ac4dfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:41:18 crc kubenswrapper[4937]: I0123 08:41:18.972554 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-kube-api-access-nsc2p" (OuterVolumeSpecName: "kube-api-access-nsc2p") pod "b07e3d97-2142-41c4-8c21-089ce1ac4dfe" (UID: "b07e3d97-2142-41c4-8c21-089ce1ac4dfe"). InnerVolumeSpecName "kube-api-access-nsc2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.026926 4937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b07e3d97-2142-41c4-8c21-089ce1ac4dfe" (UID: "b07e3d97-2142-41c4-8c21-089ce1ac4dfe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.069234 4937 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsc2p\" (UniqueName: \"kubernetes.io/projected/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-kube-api-access-nsc2p\") on node \"crc\" DevicePath \"\""
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.069277 4937 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.069286 4937 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07e3d97-2142-41c4-8c21-089ce1ac4dfe-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.337398 4937 generic.go:334] "Generic (PLEG): container finished" podID="b07e3d97-2142-41c4-8c21-089ce1ac4dfe" containerID="d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd" exitCode=0
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.337464 4937 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sttl9"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.337493 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerDied","Data":"d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd"}
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.337928 4937 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sttl9" event={"ID":"b07e3d97-2142-41c4-8c21-089ce1ac4dfe","Type":"ContainerDied","Data":"5b8543774650745186a5643bd4d1ada0346e9d72f50de42586ef5430f67ca009"}
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.337960 4937 scope.go:117] "RemoveContainer" containerID="d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.359211 4937 scope.go:117] "RemoveContainer" containerID="ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.373917 4937 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sttl9"]
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.382993 4937 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sttl9"]
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.398442 4937 scope.go:117] "RemoveContainer" containerID="41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.436577 4937 scope.go:117] "RemoveContainer" containerID="d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd"
Jan 23 08:41:19 crc kubenswrapper[4937]: E0123 08:41:19.437212 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd\": container with ID starting with d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd not found: ID does not exist" containerID="d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.437283 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd"} err="failed to get container status \"d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd\": rpc error: code = NotFound desc = could not find container \"d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd\": container with ID starting with d8128761a701154efe8f58f8016893eb04092aabcd4b6eeed263d3b1b4685cdd not found: ID does not exist"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.437327 4937 scope.go:117] "RemoveContainer" containerID="ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71"
Jan 23 08:41:19 crc kubenswrapper[4937]: E0123 08:41:19.438019 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71\": container with ID starting with ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71 not found: ID does not exist" containerID="ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.438055 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71"} err="failed to get container status \"ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71\": rpc error: code = NotFound desc = could not find container \"ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71\": container with ID starting with ab1176542b7b2183fa6d472f3dea0a99506c2e7b63910c31e6c9873c49805a71 not found: ID does not exist"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.438078 4937 scope.go:117] "RemoveContainer" containerID="41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280"
Jan 23 08:41:19 crc kubenswrapper[4937]: E0123 08:41:19.438748 4937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280\": container with ID starting with 41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280 not found: ID does not exist" containerID="41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280"
Jan 23 08:41:19 crc kubenswrapper[4937]: I0123 08:41:19.438804 4937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280"} err="failed to get container status \"41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280\": rpc error: code = NotFound desc = could not find container \"41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280\": container with ID starting with 41c85a285a012f33e1187a589fe35632360b63a553a247df1a318abad6cf8280 not found: ID does not exist"
Jan 23 08:41:20 crc kubenswrapper[4937]: I0123 08:41:20.538432 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:41:20 crc kubenswrapper[4937]: I0123 08:41:20.538934 4937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07e3d97-2142-41c4-8c21-089ce1ac4dfe" path="/var/lib/kubelet/pods/b07e3d97-2142-41c4-8c21-089ce1ac4dfe/volumes"
Jan 23 08:41:20 crc kubenswrapper[4937]: E0123 08:41:20.539338 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:41:34 crc kubenswrapper[4937]: I0123 08:41:34.526867 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:41:34 crc kubenswrapper[4937]: E0123 08:41:34.527514 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:41:47 crc kubenswrapper[4937]: I0123 08:41:47.527778 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:41:47 crc kubenswrapper[4937]: E0123 08:41:47.528491 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9"
Jan 23 08:41:59 crc kubenswrapper[4937]: I0123 08:41:59.526090 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2"
Jan 23 08:41:59 crc kubenswrapper[4937]: E0123 08:41:59.528325 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff:
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:42:10 crc kubenswrapper[4937]: I0123 08:42:10.533823 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:42:10 crc kubenswrapper[4937]: E0123 08:42:10.534713 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:42:22 crc kubenswrapper[4937]: I0123 08:42:22.527891 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:42:22 crc kubenswrapper[4937]: E0123 08:42:22.530034 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" Jan 23 08:42:33 crc kubenswrapper[4937]: I0123 08:42:33.526741 4937 scope.go:117] "RemoveContainer" containerID="ec0a7588ad337bb3a04257e1134ba2b591e3015ed8f15802d8c8ab74e7954fd2" Jan 23 08:42:33 crc kubenswrapper[4937]: E0123 08:42:33.527526 4937 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bglvs_openshift-machine-config-operator(2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9)\"" pod="openshift-machine-config-operator/machine-config-daemon-bglvs" podUID="2b4b58f5-f70b-4cf3-a5a1-fca93ab81ec9" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134632204024445 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134632205017363 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134612515016510 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134612515015460 5ustar corecore